
Deployment Guide

This guide details how to deploy Expertflow CX on Kubernetes using Helm. It covers prerequisites (Kubernetes setup, namespaces, FQDNs, TLS/SSL), and provides step-by-step instructions for deploying all CX components and required external services (MongoDB, Redis, PostgreSQL, MinIO, Keycloak, Vault) via Helm charts. The guide also explains multi-tenancy setup, component configuration, and post-deployment checks to ensure a stable environment for both on-premises and cloud deployments.

Prerequisites

The following table describes the prerequisites for using Helm for deployment.

| Item | Description | When changed |
|---|---|---|
| Kubernetes Setup | A standard and compatible release of Kubernetes is available. | A cleanly installed Kubernetes engine. This can be set up using the RKE2 Control Plane Deployment guide. |
| Wildcard Domain (FQDN) | A valid FQDN is required to deploy the solution, for example "devops.ef.com". For MTT deployment, a valid wildcard FQDN (e.g. *.expertflow.com) is required for multi-tenancy. This allows automatic mapping of tenant subdomains such as tenant1.expertflow.com and tenant2.expertflow.com. | By default there is no FQDN associated, and the helm chart(s) will exit with failure if the default value is used. |
| External Components | All external components have their own helm charts available. | If you are using externally managed components such as MongoDB, MinIO, Redis, and PostgreSQL, the relevant values should be updated in the helm chart values. Details are provided below. |
| TLS/SSL certificate | It is mandatory to have a valid SSL certificate already created in both the expertflow and ef-external namespaces. The default ingress certificate name is "ef-ingress-tls-secret". For multi-tenancy, a wildcard certificate for the domain (e.g., *.expertflow.com) must be used. The certificate and private key (server.crt and server.key) are provided by IT and must be created as a Kubernetes secret in both the expertflow and ef-external namespaces. | Required by default and must be created before the actual helm chart deployment. Update when the certificate is renewed or replaced by IT. |
| Custom configurations | All components requiring custom changes should be updated in their respective values files. | Mandatory; an upgrade of the helm chart is required when updated. |
| Ingress controller | By default, both resident and ef-cx helm charts use nginx as the ingress controller. | If using another ingress controller, for example traefik or HAProxy, update all the relevant tags and annotations to reflect the appropriate values. See the Ingress Controller Selection section below. |

EF CX Helm Chart

Global Chart Details

In addition to the sub-chart details, below are the details for this meta chart. Any key: value pair present in this file supersedes the corresponding value in a sub-chart's values file.

| Section | Item | Details | Default |
|---|---|---|---|
| global | ingressRouter | Wildcard FQDN used for the EF-CX solution | "*.expertflow.com" |
| | imageRegistry | Default container registry to pull images from | "gitimages.expertflow.com" |
| | ingressCertName | Default ingress certificate secret name; must be created before install | "ef-ingress-tls-secret" |
| | ingressClassName | Ingress class name | "nginx" |
| | commonIngressAnnotations | Common annotations for all ingress resources | "" |
| | efCommonVars_IS_WRAP_UP_ENABLED | Common environment variable | true |
| | efCommonVars_WRAPUP_TIME | Common environment variable | "60" |
| | efCommonVars_DEFAULT_ROOM_NAME | Common environment variable | CC |
| | efCommonVars_DEFAULT_ROOM_LABEL | Common environment variable | CC |
| | efCommonVars_DEFAULT_ROOM_DESCRIPTION | Common environment variable | Contact Center Room |
| | efCommonVars_CONVERSATION_SEARCH_WINDOW_HRS | Common environment variable | "24" |
| | efCommonVars_TZ | Common environment variable | UTC |
| | efCommonVars_MASK_ATTRIBUTES_PATH | Common environment variable | /sensitive.js |
| | efCommonVars_LOGGING_CONFIG | Common environment variable | /logback/logback-spring.xml |
| | efCommonVars_ROOM_IS_USER_ID | Common environment variable | false |
| clusterDomain | | Root domain for the cluster DNS | "cluster.local" |
| imageCredentials | registry | Container image registry; must be the same as global.imageRegistry | gitimages.expertflow.com |
| | username | Username for the registry | efcx |
| | password | Password for the registry user | RecRpsuH34yqp56YRFUb |
| | email | Email address for the registry config | devops@expertflow.com |
| efConnectionVars | | Contains the list of all the sub-charts' connection parameters | list of parameters |
| sub-chart | enabled | Enable or disable a sub-chart deployment (true / false) | true |

The image pull secret is created at runtime from these variables: a valid dockerconfig in JSON format is generated and added to the Kubernetes engine as a secret named ef-gitlab-secret.

Each sub-chart is named after the component for which it is developed, and its values are evaluated from the meta chart's values.yaml file.
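
For illustration, a minimal override file built from the keys above might look like this (the values shown are the documented defaults, with the FQDN replaced by your own):

CODE
global:
  ingressRouter: "devops.ef.com"              # your FQDN; use a wildcard such as "*.expertflow.com" for MTT
  imageRegistry: "gitimages.expertflow.com"
  ingressCertName: "ef-ingress-tls-secret"
  ingressClassName: "nginx"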

Sub-Chart Details

All sub-charts expose the parameters given below.

Parameters

Global parameters

| Name | Description | Value |
|---|---|---|
| global.ingressRouter | Global FQDN mapping | "" |
| global.ingressCertName | Ingress TLS certificate secret; must be created before deployment | "" |
| global.ingressClassName | Ingress class name for all the ingress resources deployed using this helm chart | "" |
| global.commonIngressAnnotations | Common annotations for all the ingress resources; add/update for individual resources if not common | {} |
| global.imageRegistry | Default image registry to pull images from | "" |
| global.imagePullSecrets | Global Docker registry secret names as an array | [] |
| global.compatibility.openshift.adaptSecurityContext | Adapt the securityContext sections of the deployment to make them compatible with the OpenShift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is OpenShift), force (perform the adaptation always), disabled (do not perform adaptation) | auto |

Common parameters

| Name | Description | Value |
|---|---|---|
| nameOverride | String to partially override the COMPONENT___NAME.fullname template (will maintain the release name) | "" |
| fullnameOverride | String to fully override the COMPONENT___NAME.fullname template | "" |
| namespaceOverride | String to fully override common.names.namespace | "" |
| kubeVersion | Force target Kubernetes version (uses Helm capabilities if not set) | "" |
| clusterDomain | Kubernetes cluster domain | cluster.local |
| extraDeploy | Extra objects to deploy (value evaluated as a template) | [] |
| commonLabels | Add labels to all the deployed resources | {} |
| commonAnnotations | Add annotations to all the deployed resources | {} |
| diagnosticMode.enabled | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | false |
| diagnosticMode.command | Command to override all containers in the deployment(s)/statefulset(s) | ["sleep"] |
| diagnosticMode.args | Args to override all containers in the deployment(s)/statefulset(s) | ["infinity"] |

COMPONENT___NAME parameters

| Name | Description | Value |
|---|---|---|
| image.registry | COMPONENT___NAME image registry | REGISTRY_NAME |
| image.repository | COMPONENT___NAME image repository | REPOSITORY_NAME/___COMPONENT___NAME___ |
| image.digest | COMPONENT___NAME image digest in the form sha256:aa.... Please note this parameter, if set, will override the tag | "" |
| image.pullPolicy | COMPONENT___NAME image pull policy | IfNotPresent |
| image.pullSecrets | Specify docker-registry secret names as an array | [] |
| automountServiceAccountToken | Mount the service account token in the pod | false |
| hostAliases | Deployment pod host aliases | [] |
| command | Override default container command (useful when using custom images) | [] |
| args | Override default container args (useful when using custom images) | [] |
| extraEnvVars | Extra environment variables to be set on COMPONENT___NAME containers | [] |
| extraEnvVarsCM | ConfigMap with extra environment variables | "" |
| extraEnvVarsSecret | Secret with extra environment variables | "" |
| efConnectionVars | Enable the efConnectionVars ConfigMap (true / false) | false |
| efEnvironmentVars | Enable the efEnvironmentVars ConfigMap (true / false) | false |

COMPONENT___NAME deployment parameters

| Name | Description | Value |
|---|---|---|
| replicaCount | Number of COMPONENT___NAME replicas to deploy | 1 |
| revisionHistoryLimit | The number of old ReplicaSets to retain to allow rollback | 10 |
| updateStrategy.type | COMPONENT___NAME deployment strategy type | RollingUpdate |
| updateStrategy.rollingUpdate | COMPONENT___NAME deployment rolling update configuration parameters | {} |
| podLabels | Additional labels for COMPONENT___NAME pods | {} |
| podAnnotations | Annotations for COMPONENT___NAME pods | {} |
| podAffinityPreset | Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard | "" |
| podAntiAffinityPreset | Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard | "" |
| nodeAffinityPreset.type | Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard | "" |
| nodeAffinityPreset.key | Node label key to match. Ignored if affinity is set. | "" |
| nodeAffinityPreset.values | Node label values to match. Ignored if affinity is set. | [] |
| affinity | Affinity for pod assignment | {} |
| hostNetwork | Specify if host network should be enabled for the COMPONENT___NAME pod | false |
| hostIPC | Specify if host IPC should be enabled for the COMPONENT___NAME pod | false |
| dnsPolicy | Specifies the DNS policy for the COMPONENT___NAME pod | "" |
| dnsConfig | Allows users more control of the DNS settings for a pod. Required if dnsPolicy is set to None | {} |
| nodeSelector | Node labels for pod assignment. Evaluated as a template. | {} |
| tolerations | Tolerations for pod assignment. Evaluated as a template. | [] |
| priorityClassName | COMPONENT___NAME pods' priorityClassName | "" |
| schedulerName | Name of the k8s scheduler (other than default) | "" |
| terminationGracePeriodSeconds | Time (in seconds) the COMPONENT___NAME pod needs to terminate gracefully | "" |
| topologySpreadConstraints | Topology spread constraints for pod assignment | [] |
| podSecurityContext.enabled | Enable COMPONENT___NAME pods' security context | false |
| podSecurityContext.fsGroupChangePolicy | Set filesystem group change policy | Always |
| podSecurityContext.supplementalGroups | Set filesystem extra groups | [] |
| podSecurityContext.fsGroup | Set COMPONENT___NAME pod's security context fsGroup | 1001 |
| podSecurityContext.sysctls | sysctl settings of the COMPONENT___NAME pods | [] |
| containerSecurityContext.enabled | Enable containers' security context | false |
| containerSecurityContext.seLinuxOptions | Set SELinux options in container | nil |
| containerSecurityContext.runAsUser | Set containers' security context runAsUser | 1001 |
| containerSecurityContext.runAsGroup | Set containers' security context runAsGroup | 1001 |
| containerSecurityContext.runAsNonRoot | Set container's security context runAsNonRoot | true |
| containerSecurityContext.privileged | Set container's security context privileged | false |
| containerSecurityContext.readOnlyRootFilesystem | Set container's security context readOnlyRootFilesystem | true |
| containerSecurityContext.allowPrivilegeEscalation | Set container's security context allowPrivilegeEscalation | false |
| containerSecurityContext.capabilities.drop | List of capabilities to be dropped | ["ALL"] |
| containerSecurityContext.seccompProfile.type | Set container's security context seccomp profile | RuntimeDefault |
| containerPorts | Array of additional container ports for the COMPONENT___NAME container | [] |
| resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | none |
| resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {} |
| lifecycleHooks | Optional lifecycleHooks for the COMPONENT___NAME container | {} |
| startupProbe.enabled | Enable startupProbe | false |
| startupProbe.initialDelaySeconds | Initial delay seconds for startupProbe | 30 |
| startupProbe.periodSeconds | Period seconds for startupProbe | 10 |
| startupProbe.timeoutSeconds | Timeout seconds for startupProbe | 5 |
| startupProbe.failureThreshold | Failure threshold for startupProbe | 6 |
| startupProbe.successThreshold | Success threshold for startupProbe | 1 |
| livenessProbe.enabled | Enable livenessProbe | true |
| livenessProbe.initialDelaySeconds | Initial delay seconds for livenessProbe | 30 |
| livenessProbe.periodSeconds | Period seconds for livenessProbe | 10 |
| livenessProbe.timeoutSeconds | Timeout seconds for livenessProbe | 5 |
| livenessProbe.failureThreshold | Failure threshold for livenessProbe | 6 |
| livenessProbe.successThreshold | Success threshold for livenessProbe | 1 |
| readinessProbe.enabled | Enable readinessProbe | true |
| readinessProbe.initialDelaySeconds | Initial delay seconds for readinessProbe | 5 |
| readinessProbe.periodSeconds | Period seconds for readinessProbe | 5 |
| readinessProbe.timeoutSeconds | Timeout seconds for readinessProbe | 3 |
| readinessProbe.failureThreshold | Failure threshold for readinessProbe | 3 |
| readinessProbe.successThreshold | Success threshold for readinessProbe | 1 |
| autoscaling.enabled | Enable autoscaling for the COMPONENT___NAME deployment | false |
| autoscaling.minReplicas | Minimum number of replicas to scale back | "" |
| autoscaling.maxReplicas | Maximum number of replicas to scale out | "" |
| autoscaling.targetCPU | Target CPU utilization percentage | "" |
| autoscaling.targetMemory | Target memory utilization percentage | "" |
| extraVolumes | Array to add extra volumes | [] |
| extraVolumeMounts | Array to add extra mounts | [] |
| serviceAccount.create | Enable creation of a ServiceAccount for the COMPONENT___NAME pod | false |
| serviceAccount.name | The name of the ServiceAccount to use | "" |
| serviceAccount.annotations | Annotations for the service account. Evaluated as a template. | {} |
| serviceAccount.automountServiceAccountToken | Auto-mount the service account token in the pod | false |
| sidecars | Sidecar parameters | [] |
| sidecarSingleProcessNamespace | Enable sharing the process namespace with sidecars | false |
| initContainers | Extra init containers | [] |
| pdb.create | Create a PodDisruptionBudget | false |
| pdb.minAvailable | Minimum number of pods that must still be available after the eviction | "" |
| pdb.maxUnavailable | Maximum number of pods that can be unavailable after the eviction | "" |

Traffic Exposure parameters

| Name | Description | Value |
|---|---|---|
| service.enabled | Whether the Service object should be created for this component | true |
| service.type | Type of the Service exposed | ClusterIP |
| service.port | Port number of the service | "" |
| service.targetPort | targetPort of the container to which this service will route the traffic | "" |
| service.portName | Name of the Service's port; should be the same as targetPort | "" |
| service.protocol | Protocol for this service: TCP or UDP | TCP |
| service.nodePort | Valid if the type is set to NodePort; range 30000 to 32767 | "" |
| service.clusterIP | COMPONENT___NAME service cluster IP | "" |
| service.extraPorts | Extra ports to expose (normally used with the sidecar value) | [] |
| service.sessionAffinity | Session affinity for the Kubernetes service; can be "None" or "ClientIP" | None |
| service.sessionAffinityConfig | Additional settings for the sessionAffinity | {} |
| service.annotations | Service annotations | {} |
| service.externalTrafficPolicy | Enable client source IP preservation | Cluster |
| networkPolicy.enabled | Specifies whether a NetworkPolicy should be created | false |
| networkPolicy.allowExternal | Don't require the server label for connections | true |
| networkPolicy.allowExternalEgress | Allow the pod to access any range of ports and all destinations | true |
| networkPolicy.extraIngress | Add extra ingress rules to the NetworkPolicy | [] |
| networkPolicy.extraEgress | Add extra egress rules to the NetworkPolicy (ignored if allowExternalEgress=true) | [] |
| networkPolicy.ingressNSMatchLabels | Labels to match to allow traffic from other namespaces | {} |
| networkPolicy.ingressNSPodMatchLabels | Pod labels to match to allow traffic from other namespaces | {} |
| ingress.enabled | Set to true to enable ingress record generation | true |
| ingress.pathType | Ingress path type | ImplementationSpecific |
| ingress.apiVersion | Force Ingress API version (automatically detected if not set) | "" |
| ingress.hostname | Default host for the ingress resource | fqdn.com |
| ingress.path | The path for the ingress resource. You may need to set this to '/*' in order to use this with ALB ingress controllers. | "" |
| ingress.annotations | Additional annotations for the Ingress resource. To enable certificate autogeneration, place your cert-manager annotations here. | {} |
| ingress.ingressClassName | Set the ingressClassName on the ingress record for k8s 1.18+ | nginx |
| ingress.extraHosts | The list of additional hostnames to be covered by this ingress record | [] |
| ingress.extraPaths | Any additional arbitrary paths that may need to be added to the ingress under the main host | nil |
| ingress.tlsSecretName | If you're providing your own certificates, use this to add the certificates as secrets | {{ .Values.global.ingressCertName }} |
| ingress.extraRules | The list of additional rules to be added to this ingress record. Evaluated as a template. | |

Add helm repository

CODE
helm repo add expertflow https://expertflow.github.io/charts/

Update the helm repo

CODE
helm repo update expertflow
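
Optionally confirm the repository was added and the charts are visible:

CODE
helm search repo expertflow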

Helm chart functional groups

CX helm charts are divided into functional groups.

| Group | Description | Dependency |
|---|---|---|
| CX | Serves the basic and core functionality of the CX solution | External Components |
| Web Channels | Provides CX enhancements for digital channels | CX |
| AgentDesk | Provides a separate deployment for customers where AgentDesk is optional | CX |
| Campaigns | Provides Campaigns collaboration | CX |
| Reporting | Reporting related to the CX | CX |
| Eleveo | Eleveo functional group | CX |
| CiscoScheduler | Cisco functional group | CX |
| mtt-single | Hosts all non-MTT components that are deployed separately for each tenant instance | CX |
| Metabase | Reporting | CX |

Prepare for CX Deployment

Step 1: Clone the Expertflow CX repository

CODE
git clone -b CX-5.0 https://efcx:RecRpsuH34yqp56YRFUb@gitlab.expertflow.com/cim/cim-solution.git CX-5.0
CODE
cd CX-5.0/kubernetes/

Step 2: Create Namespaces

  1. Create a namespace expertflow for all Expertflow components

Run the following command on the control-plane node.

CODE
kubectl create namespace expertflow

  2. Create a namespace ef-external for all the external elements of the Expertflow CX solution, such as MongoDB, Redis, MinIO, etc.

Run the following command on the control-plane node.

CODE
kubectl create namespace ef-external

Ingress Controller Selection

  • The default ingressClass is set to "nginx" in all helm charts' global section. If you prefer to use another ingress controller, update ingressClassName to the appropriate value.

  • All helm charts served at the expertflow helm repository (CX groups/components and external components) are by default compatible with the ingress-nginx controller, using ingress-nginx annotations. Should there be a requirement for another ingress controller such as Traefik, HAProxy, or Contour, adjust the annotations for all components accordingly. A dedicated guide for using Traefik as the ingress controller with the CX solution is available.

Add TLS Certificates

  • For self-signed certificates, use this guide in both the ef-external and expertflow namespaces (use for lab VMs).

  • (Only For Multitenancy)
    For multi-tenancy deployments, a wildcard SSL certificate is required (e.g., *.expertflow.com).
    The certificate (server.crt) and private key (server.key) will be provided by the IT department.

    You must create a Kubernetes secret with these files in both namespaces (expertflow & ef-external)

    The default secret name must be ef-ingress-tls-secret

    CODE
    kubectl -n expertflow create secret tls ef-ingress-tls-secret --key  server.key --cert server.crt
    
    kubectl -n ef-external create secret tls ef-ingress-tls-secret --key  server.key --cert server.crt
  • For commercial certificates, import them as tls.crt and tls.key and create a secret named ef-ingress-tls-secret in both the ef-external and expertflow namespaces (an example command is given at the end of this section).

  • For Let's Encrypt SSL for EFCX (use for any VM other than lab, i.e. AWS, Contabo, etc.).

NOTE: When using LE-based TLS certificates, you will have to enable the correct annotations in all the relevant values files:

CODE
sed -i -e 's/#cert-manager.io\/cluster-issuer: /cert-manager.io\/cluster-issuer: /g' helm/keycloak/values.yaml
sed -i -e 's/#cert-manager.io\/cluster-issuer: /cert-manager.io\/cluster-issuer: /g' helm/apisix/values.yaml

This procedure is required for both externals and all CX group charts being deployed.
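
For the commercial-certificate case above, an example of creating the secret (assuming the files are named tls.crt and tls.key):

CODE
kubectl -n expertflow create secret tls ef-ingress-tls-secret --key tls.key --cert tls.crt
kubectl -n ef-external create secret tls ef-ingress-tls-secret --key tls.key --cert tls.crt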

Step 3: Apply Image Pull secret

  1. Run the following command to apply the ImagePullSecret for Expertflow CX images.

CODE
kubectl apply -f pre-deployment/registryCredits/ef-imagePullSecret-ef-external.yaml

Create a directory to hold values files for all the helm charts.

CODE
mkdir helm-values

Custom Password Interpolations

Below are the interpolations when using a custom or non-default password for MongoDB, MinIO, Redis, PostgreSQL, and ActiveMQ.

| Component with custom password | Update required in |
|---|---|
| MongoDB | 1. Update CX -> values.yaml -> efConnectionVars -> MONGODB_PASSWORD |
| PostgreSQL | 1. Update keycloak -> keycloak-custom-values.yaml -> externalDatabase -> password. 2. Update CX -> values.yaml -> license-manager -> extraEnvVars -> DB_PASS |
| MinIO | 1. Update CX -> values.yaml -> file-engine -> extraEnvVars -> ACCESSKEY, SECRETKEY |
| Redis (ACL enabled) | 1. Update CX -> values.yaml -> efConnectionVars. 2. Update ActiveMQ -> active-custom-values.yaml -> extraEnvVars -> REDIS_PASSWORD |
| Keycloak | N/A |
| ActiveMQ | N/A |
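
For instance, the MongoDB case above as a values-file excerpt (key path as listed in the table; the password shown is a placeholder):

CODE
efConnectionVars:
  MONGODB_PASSWORD: "<YOUR_CUSTOM_PASSWORD>"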

Setup SQL Database

Expertflow CX requires PostgreSQL for storing configuration data.

If you are deploying external components with provided TLS certificates, you must run the following command before deployment

CODE
kubectl apply -f pre-deployment/static-tls

PostgreSQL (RECOMMENDED)

If you do not have PostgreSQL in your environment, create a ConfigMap for PostgreSQL to create the necessary databases and preload them with bootstrap configurations.

CODE
kubectl -n ef-external  create configmap ef-postgresql-license-manager-cm --from-file=./pre-deployment/licensemanager/licensemanager.sql

Download the values.yaml file locally to customize the parameter values.

CODE
helm show values expertflow/postgresql --version 5.0 > helm-values/ef-postgresql-custom-values.yaml

Update the following values file helm-values/ef-postgresql-custom-values.yaml as mentioned below:-

CODE
auth:
  password: "<CHANGE_PASSWORD>"

For Worker HA deployment, add the following tolerations:-

CODE
  tolerations: 
    - key: "node.kubernetes.io/unreachable"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 60 # Evict after 60 seconds of being unreachable
    - key: "node.kubernetes.io/not-ready"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 60 # Evict after 60 seconds of being not-ready

Deploy PostgreSQL:

BASH
helm upgrade --install=true --namespace=ef-external --values=helm-values/ef-postgresql-custom-values.yaml  ef-postgresql expertflow/postgresql --version 5.0
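
Optionally wait for PostgreSQL to become ready; assuming the release name above, the statefulset is expected to be named ef-postgresql:

CODE
kubectl -n ef-external rollout status sts ef-postgresql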

For managed PostgreSQL, see this guide for configuring PostgreSQL for Expertflow CX.

Deploy CX External Components

Expertflow CX requires the following 3rd party components.

Cache - Redis(ACL enabled)

Key-Values based Caching engine, used by most of the EF-CX components.

MongoDB

NoSQL Database, maintains and serves as primary back store for EF-CX solution.

MinIO

S3 compliant object storage.

IAM (Keycloak)

Realm based auth management tool.

You may use them from your existing environment or from a cloud provider.

Setup IAM (Keycloak)

Prerequisites

Before proceeding with the Keycloak deployment, please update the backend database connection string parameters (when using non-default passwords).

Clone the values file and update the parameter values:

CODE
helm show values expertflow/keycloak --version 5.0 > helm-values/ef-keycloak-custom-values.yaml

Edit helm-values/ef-keycloak-custom-values.yaml and update the password for the PostgreSQL database:

CODE
global:
  ingressRouter: <DEFAULT-FQDN>
externalDatabase:
  password: "Expertflow123"

The default Keycloak deployment uses PostgreSQL running inside the same Kubernetes cluster. When using a managed PostgreSQL database instance, update the above parameters with the relevant information.
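
A minimal sketch of what the managed-database override could look like (host, port, user, and database name here are placeholders, not defaults):

CODE
externalDatabase:
  host: "<MANAGED_PG_HOST>"
  port: 5432
  user: "<PG_USER>"
  password: "<PG_PASSWORD>"
  database: "<KEYCLOAK_DB_NAME>"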

For Worker HA deployments, add the following tolerations:-

CODE
tolerations:
    - key: "node.kubernetes.io/unreachable"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 60 # Evict after 60 seconds of being unreachable
    - key: "node.kubernetes.io/not-ready"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 60 # Evict after 60 seconds of being not-ready

IAM (Keycloak) Deployment

IAM (Keycloak) is used as the centralized authentication and authorization component for Expertflow CX. Follow these steps to set up Keycloak.

Now, deploy Keycloak by running the following command:

CODE
helm upgrade --install=true  --debug --namespace=ef-external  --values=helm-values/ef-keycloak-custom-values.yaml keycloak expertflow/keycloak --version 5.0

Check the Keycloak installation status by using the following command:

CODE
kubectl -n ef-external rollout status sts keycloak

Setup MongoDB

Expertflow CX uses MongoDB for storing all CX events, activities, and some configuration data as well.

Skip this step if you already have MongoDB in your environment that can be used by Expertflow CX. For using MongoDB from a managed environment, see this guide for necessary configurations.

Clone the values file to update the parameter values

CODE
helm show values expertflow/mongodb --version 5.0 > helm-values/ef-mongodb-custom-values.yaml

Update the following values file helm-values/ef-mongodb-custom-values.yaml as mentioned below

CODE
auth:
  rootPassword: "Expertflow123"

For Worker HA deployments, add the following tolerations:-

CODE
tolerations:
    - key: "node.kubernetes.io/unreachable"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 60 # Evict after 60 seconds of being unreachable
    - key: "node.kubernetes.io/not-ready"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 60 # Evict after 60 seconds of being not-ready

Deploy MongoDB by running the following command.

CODE
helm upgrade --install=true --namespace=ef-external --values=helm-values/ef-mongodb-custom-values.yaml mongo expertflow/mongodb --version 5.0

Check the MongoDB deployment status by running the following command:

CODE
kubectl -n ef-external rollout status sts mongo-mongodb

Setup MinIO as S3 Storage

Expertflow CX uses MinIO for storing files exchanged between agents, customers, and/or bots. Install it using Helm with the following commands.

Clone the values file for updating the parameter values

CODE
helm show values expertflow/minio --version 5.0 > helm-values/ef-minio-custom-values.yaml

Update the MinIO values file helm-values/ef-minio-custom-values.yaml with the required rootUser and rootPassword values

CODE
auth:
  rootUser: minioadmin
  rootPassword: "minioadmin"

Deploy the minio helm chart

CODE
helm upgrade --install=true --namespace=ef-external --values=helm-values/ef-minio-custom-values.yaml minio expertflow/minio --version 5.0

Wait for the MinIO deployment to become ready

CODE
kubectl -n ef-external  rollout status deployment  minio --timeout=5m

Digital Channel Icons Bootstrapping

Proceed with icons bootstrapping.

CODE
kubectl apply -f scripts/minio-helper.yaml
CODE
kubectl -n ef-external --timeout=90s wait --for=condition=ready pod minio-helper
CODE
kubectl -n ef-external cp post-deployment/data/minio/bucket/default minio-helper:/tmp/
CODE
kubectl -n ef-external cp scripts/icon-helper.sh minio-helper:/tmp/
CODE
kubectl -n ef-external exec -it minio-helper -- /bin/sh /tmp/icon-helper.sh
CODE
kubectl delete -f scripts/minio-helper.yaml

Setup Redis

CX uses Redis for storing the active system state of most CX objects. Redis is deployed with Access Control Lists (ACLs) to manage multiple users and credentials securely.

Clone the values file to update the parameter values

CODE
helm show values expertflow/redis --version 5.0 > helm-values/ef-redis-custom-values.yaml

Update the following values helm-values/ef-redis-custom-values.yaml as mentioned below:-

CODE
auth:
  password: "Expertflow123"  # Change this to match the requirements  

For Worker HA deployments, add the following tolerations:-

CODE
tolerations:
    - key: "node.kubernetes.io/unreachable"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 60 # Evict after 60 seconds of being unreachable
    - key: "node.kubernetes.io/not-ready"
      operator: "Exists"
      effect: "NoExecute"
      tolerationSeconds: 60 # Evict after 60 seconds of being not-ready 

Create Redis ACL Secret

CODE
kubectl -n ef-external create secret generic ef-redis-acl-secret --from-literal=superuser=Expertflow464

Run the following command to deploy Redis.

CODE
helm upgrade --install=true  --namespace=ef-external --values=helm-values/ef-redis-custom-values.yaml  redis expertflow/redis --version 5.0
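
Optionally wait for Redis to become ready; the statefulset name below assumes the chart's default naming for the redis release:

CODE
kubectl -n ef-external rollout status sts redis-master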

Setup Application Gateway (APISIX)

Clone the apisix values.yaml file

CODE
helm show values expertflow/apisix --version 5.0  > helm-values/apisix-custom-values.yaml

Update the apisix-custom-values.yaml file with the following parameters:

CODE
global:
  ingressRouter: "*.expertflow.com"   # * for MTT & for prem replace FQDN
  ingressClassName: "nginx"
  ingressTlsCertName: "ef-ingress-tls-secret"

Deploy APISIX using the updated custom values file:

CODE
helm upgrade --install --namespace ef-external --values helm-values/apisix-custom-values.yaml apisix expertflow/apisix --version 5.0

Verify the APISIX deployment:

CODE
kubectl -n ef-external get deploy

For MTT: Setup Nginx Router for Multi-Deployment Routing (Non-MTT Components)

This manifest typically includes a Service, ConfigMap, and Deployment.

Run the following command to deploy the tenant router:

CODE
kubectl -n expertflow apply -f pre-deployment/nginx-router/nginx-router-manifests.yaml

Setup CX Bus (ActiveMQ)

Clone the values file to update the required parameters:

CODE
helm show values expertflow/activemq --version 5.0 > helm-values/ef-activemq-custom-values.yaml
CODE
helm upgrade --install=true  --namespace=ef-external --values=helm-values/ef-activemq-custom-values.yaml activemq expertflow/activemq --version 5.0
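
Optionally verify that the ActiveMQ pods come up:

CODE
kubectl -n ef-external get pods | grep activemq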

CX ClamAV

ClamAV is an optional service that scans files before they are uploaded to the file engine. You can enable or disable scanning via the file engine's environment variable IS_SCAN_ENABLED; by default, it is enabled.

Customise the deployment by fetching the values.yaml file and editing it as per your requirements.

CODE
helm show values expertflow/clamav --version 5.0   > helm-values/cx-clamav-values.yaml

Edit/update the values file helm-values/cx-clamav-values.yaml with

CODE
global:
  ingressRouter: <DEFAULT-FQDN>

Deploy the ClamAV helm chart:

CODE
helm upgrade --install --namespace ef-external --set global.efCxReleaseName="ef-cx" clamav --debug --values helm-values/cx-clamav-values.yaml expertflow/clamav --version 5.0

Setup Vault
Copy mongo-mongodb-ca from ef-external to vault namespace

CODE
kubectl create namespace vault
CODE
kubectl get secret mongo-mongodb-ca -n ef-external  -o yaml | sed 's/namespace: ef-external/namespace: vault/' | kubectl create -f -

Customise values.yaml

You must first edit the ef-vault-custom-values.yaml file to define your minimum required configurations.

Step 1: Create the Values File

Run the following command to create a new file:

CODE
helm show values expertflow/vault --version 5.0 > helm-values/ef-vault-custom-values.yaml

Use the following vault configuration guide

Use the following vault dynamic database configuration guide
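
The deployment itself follows the same pattern as the other externals; a sketch, assuming the vault namespace created above and the expertflow/vault chart:

CODE
helm upgrade --install=true --namespace=vault --values=helm-values/ef-vault-custom-values.yaml vault expertflow/vault --version 5.0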

Deploy CX Components

Custom Configuration

For detailed guidelines on applying environment-specific configurations using custom values.yaml layering, refer to the CX Helm Chart Custom Configuration Strategy guide.

SSL/TLS Import in Namespaces

Transfer the Mongo, Redis, PostgreSQL, and ActiveMQ certificates from the ef-external namespace to the expertflow namespace:

CODE
kubectl get secret mongo-mongodb-ca -n ef-external  -o yaml | sed 's/namespace: ef-external/namespace: expertflow/' | kubectl create -f -
kubectl get secret redis-crt -n ef-external  -o yaml | sed 's/namespace: ef-external/namespace: expertflow/' | kubectl create -f -
kubectl get secret ef-postgresql-crt -n ef-external  -o yaml | sed 's/namespace: ef-external/namespace: expertflow/' | kubectl create -f -
kubectl get secret activemq-tls -n ef-external  -o yaml | sed 's/namespace: ef-external/namespace: expertflow/' | kubectl create -f -

CX Core

Setup default translation file for customer widget

CODE
kubectl -n expertflow  create configmap ef-widget-translations-cm --from-file=pre-deployment/app-translations/customer-widget/i18n/

Apply the ConfigMap to enable log masking for all components in the expertflow namespace:

CODE
kubectl apply -f pre-deployment/logback/
kubectl -n expertflow create configmap ef-logback-cm --from-file=pre-deployment/logback/logback-spring.xml

Setup graphql schemas and mongodb rules configmaps

CODE
kubectl create configmap -n expertflow conversation-manager-graphql-schemas --from-file=pre-deployment/conversation-manager/graphql/schemas
CODE
kubectl create configmap -n expertflow conversation-manager-graphql-mongodb-rules --from-file=pre-deployment/conversation-manager/graphql/graphql-mongodb-rules.json
CODE
kubectl create configmap -n expertflow routing-engine-graphql-schemas --from-file=./pre-deployment/routing-engine/graphql/schemas
CODE
kubectl create configmap -n expertflow routing-engine-graphql-memory-rules --from-file=./pre-deployment/routing-engine/graphql/graphql-memory-rules.json
CODE
kubectl create configmap -n expertflow routing-engine-graphql-redis-rules --from-file=./pre-deployment/routing-engine/graphql/graphql-redis-rules.json

Create and Customise ef-cx-custom-values.yaml

You must first create a custom values.yaml file to define your minimum required configurations.

Step 1: Create the Values File

Run the following command to create a new file:

CODE
vi helm-values/ef-cx-custom-values.yaml 

You can extend the same ef-cx-custom-values.yaml file with additional configurations as needed, including environment variables, replica counts, etc.

To view all available default configurations and decide what you want to override:

CODE
helm show values expertflow/cx --version 5.0

This command prints the full default values.yaml file used by the CX chart, which serves as a reference for all configurable parameters.

We recommend only overriding the values you need in your custom file to keep the configuration lean and maintainable.

For Single tenant, update the following configuration in ef-cx-custom-values.yaml

  • global:
    ingressRouter: <CUSTOM-FQDN>

  • In EF-Connection Vars you need to update the following vars as per your valid domain & DBs

    CODE
    ROOT_DOMAIN: "<TenantID>" // your tenantId
    ENABLE_CLOUD_MANAGED_CONNECTIONS: "false" // if using managed services, set to true
    MONGODB_URI_PREFIX: "mongodb" // set to mongodb+srv if using DNS Seedlist to manage replicas
  • In real-time reports, change the following extraEnvVars as per your reporting DB:

    • DATASOURCE_URL

    • DATASOURCE_USERNAME

    • DATASOURCE_PASSWORD

  • The variables below are added in the cx-tenant service:

    CODE
    #Modify below values according to the cloud instance
    - name: ENCRYPTION_KEY
      value: "6f3a2f95b7c0e4b1f37c0a9df8b68d7ea7d5bfbf41e2d88b3b9b55f4a6d1c2f3"
    - name: AZURE_STORAGE_ACCOUNT
      value: "efcxblobstorage"
    - name: AZURE_STORAGE_KEY
      value: "R/RYZ+knJWlYKr6fzWSLRSoauiY7/62K1n7kZ80d0zWPYqaZabokDCbJjFMgL20YhYYmGD4LxDre+AStFhKsqA=="
    - name: FS_URL
      value: "http://20.123.60.36:8000/add-domain"   # replace with the voice FS URL

For Multi-tenant, update the following configuration in ef-cx-custom-values.yaml

  • global:
    ingressRouter: "*.expertflow.com"

  • In EF-Connection Vars you need to update the following vars as per your valid domain & DBs

    CODE
    ROOT_DOMAIN: "expertflow.com" // your root domain
    ENABLE_CLOUD_MANAGED_CONNECTIONS: "false"
    MONGODB_URI_PREFIX: "mongodb"
  • In real-time reports, change the following extraEnvVars as per your reporting DB:

    • DATASOURCE_URL

    • DATASOURCE_USERNAME

    • DATASOURCE_PASSWORD

  • The variables below are added in the cx-tenant service:

    CODE
    #Modify below values according to the cloud instance
    - name: ENCRYPTION_KEY
      value: "6f3a2f95b7c0e4b1f37c0a9df8b68d7ea7d5bfbf41e2d88b3b9b55f4a6d1c2f3"
    - name: AZURE_STORAGE_ACCOUNT
      value: "efcxblobstorage"
    - name: AZURE_STORAGE_KEY
      value: "R/RYZ+knJWlYKr6fzWSLRSoauiY7/62K1n7kZ80d0zWPYqaZabokDCbJjFMgL20YhYYmGD4LxDre+AStFhKsqA=="
    - name: FS_URL
      value: "http://20.123.60.36:8000/add-domain"   # replace with the voice FS URL

For MTT, change the CONTROLLER_URL in the conversation manager:

CODE
- name: CONTROLLER_URL
  value: "http://tenantId-conversation-studio-svc.tenantId.svc:1880"

For MTT, disable the conversation-studio flag in the ef-cx-custom values file:

CODE
conversation-studio:
  enabled: false

Deploy the CX Core using the custom values file:

CODE
helm upgrade --install --namespace expertflow --create-namespace   ef-cx  --debug --values helm-values/ef-cx-custom-values.yaml expertflow/cx --version 5.0

"ef-cx" in the above command is the release name, which will be referenced in all subsequent functional groups' deployments.

Check the status of the CX components:

CODE
kubectl -n expertflow get pods

Once the cx-tenant pod is deployed, copy the icons into its /icons directory to persist them:

CODE
kubectl -n expertflow cp post-deployment/data/minio/bucket/default <cx-tenant-pod-name>:/icons
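
To resolve <cx-tenant-pod-name>, you can look it up by name (a sketch; the exact pod prefix depends on the release name):

CODE
CX_TENANT_POD=$(kubectl -n expertflow get pods -o name | grep cx-tenant | head -n 1 | cut -d/ -f2)
kubectl -n expertflow cp post-deployment/data/minio/bucket/default "$CX_TENANT_POD":/icons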

CX Agent Desk

Setup default translation file for Agent Desk

CODE
kubectl -n expertflow  create configmap ef-app-translations-cm --from-file=pre-deployment/app-translations/unified-agent/i18n

Setup default canned messages translations file for Agent Desk

CODE
kubectl -n expertflow  create configmap ef-canned-messages-cm --from-file=pre-deployment/app-translations/unified-agent/canned-messages

Apply CRM ConfigMap for Agent Desk

CODE
kubectl -n expertflow create configmap ef-crm-service-cm --from-file=pre-deployment/crm-service/

Note: For Single-tenant

Update the url field under supervisor_dashboard_cim_json_api in the file post-deployment/config/grafana/supervisor-dashboards/datasource.yml with the FQDN of the machine.

CODE
############################################  JSON API CONFIGURATION ##########################################
  - name: supervisor_dashboard_cim_json_api
    url : https://devops234.ef.com ## Update with the FQDN of the machine

############################################  INFINITY API PLUGIN CONFIGURATION ##########################################
  - name: infinity_cim_json_api
    jsonData:
      allowedHosts: #Add the FQDN of the registered Tenants
        - "https://example1.com"

Note: For Multi-tenant

  1. Update the url field under supervisor_dashboard_cim_json_api in the file post-deployment/config/grafana/supervisor-dashboards/datasource.yml with the CX_TENANT_URL FQDN.

  2. Add the FQDNs of all tenants to the allowedHosts field under infinity_cim_json_api in the file post-deployment/config/grafana/supervisor-dashboards/datasource.yml.

CODE
############################################  JSON API CONFIGURATION ##########################################
 - name: supervisor_dashboard_cim_json_api
   url : https://devops234.ef.com ##Update with the CX_TENANT_URL FQDN. Note: Don't use svc name

############################################  INFINITY API PLUGIN CONFIGURATION ##########################################
  - name: infinity_cim_json_api
    jsonData:
      allowedHosts: #Add the FQDN of all the registered Tenants
        - "*"  
        - "https://example1.com"
        - "https://example2.com"
        - "https://example3.com"
        - "https://example4.com"

Apply the Grafana data-source manifest.

CODE
kubectl -n expertflow  create secret generic ef-grafana-datasource-secret --from-file=post-deployment/config/grafana/supervisor-dashboards/datasource.yml

Apply Grafana provider manifest.

CODE
kubectl -n expertflow create cm ef-grafana-dashboard-provider-cm --from-file=post-deployment/config/grafana/supervisor-dashboards/dashboard.yml

Apply the ConfigMaps for the dashboard files using the steps below.

BASH
###### SUPERVISOR DASHBOARD ######
kubectl -n expertflow create configmap ef-grafana-supervisor-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/Supervisor_Dashboard_CIM.json

###### AGENT DASHBOARD ######
kubectl -n expertflow create configmap ef-grafana-agent-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/Agent_Dashboard_CIM.json

###### AGENT TEAMS DASHBOARD ######
kubectl -n expertflow create configmap ef-grafana-agent-teams-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/agent_teams_dashboard.json

###### AGENT PERFORMANCE DASHBOARD ######
kubectl -n expertflow create configmap ef-grafana-agent-performance-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/agent_performance_dashboard.json

###### SOCIAL MEDIA PERFORMANCE DASHBOARD ######
kubectl -n expertflow create configmap ef-grafana-social-media-performance-trend-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/social_media_performance_trend_dashboard.json

###### TEAM STATISTICS DASHBOARD ######
kubectl -n expertflow create configmap ef-grafana-team-statistics-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/team_statistics_dashboard.json 

Install the Agent Desk using the helm chart

Customise values.yaml

You must first edit the values.yaml file to define your minimum required configurations.

Step 1: Create the Values File

Run the following command to create a new file:

CODE
vi helm-values/cx-agent-desk-custom-values.yaml

Step 2: Add Required Minimum Configuration

In the opened file, add the following section to define the FQDN (Fully Qualified Domain Name) for ingress routing:

🔁 Replace <CUSTOM-FQDN> and <IP> with your actual domain and IP, e.g., devops.example.com.

CODE
global:
  ingressRouter: <CUSTOM-FQDN>
grafana:
  hostAliases:
    - ip: "<IP>"
      hostnames:
        - "{{ .Values.global.ingressRouter }}"

This is the minimum required customisation for the CX Helm chart to work.

Optional: Add Further Customisations

You can extend the same cx-agent-desk-custom-values.yaml file with additional configurations as needed, including environment variables, replica counts, etc.

To view all available default configurations and decide what you want to override:

CODE
helm show values expertflow/agent-desk --version 5.0

This command prints the full default values.yaml file used by the CX chart, which serves as a reference for all configurable parameters.

We recommend only overriding the values you need in your custom file to keep the configuration lean and maintainable.

CODE
helm upgrade --install --namespace expertflow   --set global.efCxReleaseName="ef-cx"  cx-agent-desk  --debug --values helm-values/cx-agent-desk-custom-values.yaml expertflow/agent-desk --version 5.0

CX Channels

Customise values.yaml

You must first edit the values.yaml file to define your minimum required configurations.

Step 1: Create the Values File

Run the following command to create a new file:

CODE
vi helm-values/cx-channels-custom-values.yaml

Step 2: Add Required Minimum Configuration

In the opened file, add the following section to define the FQDN (Fully Qualified Domain Name) for ingress routing:

CODE
global:   
  ingressRouter: <CUSTOM-FQDN> 

🔁 Replace <CUSTOM-FQDN> with your actual domain, e.g., devops.example.com.

This is the minimum required customisation for the CX Helm chart to work.

Optional: Add Further Customisations

You can extend the same cx-channels-custom-values.yaml file with additional configurations as needed, including environment variables, replica counts, etc.

To view all available default configurations and decide what you want to override:

CODE
helm show values expertflow/channels --version 5.0

This command prints the full default values.yaml file used by the CX chart, which serves as a reference for all configurable parameters.

We recommend only overriding the values you need in your custom file to keep the configuration lean and maintainable.

Deploy the Channels helm chart:

CODE
helm upgrade --install --namespace expertflow  --set global.efCxReleaseName="ef-cx"   --debug   cx-channels --values  helm-values/cx-channels-custom-values.yaml  expertflow/channels --version 5.0 

CX Campaigns

Customise values.yaml

You must first edit the values.yaml file to define your minimum required configurations.

Step 1: Create the Values File

Run the following command to create a new file:

CODE
vi helm-values/cx-campaigns-custom-values.yaml

Step 2: Add Required Minimum Configuration

In the opened file, add the following section to define the FQDN (Fully Qualified Domain Name) for ingress routing:

CODE
global:   
  ingressRouter: <CUSTOM-FQDN> 

🔁 Replace <CUSTOM-FQDN> with your actual domain, e.g., devops.example.com.

This is the minimum required customisation for the CX Helm chart to work.

Optional: Add Further Customisations

You can extend the same cx-campaigns-custom-values.yaml file with additional configurations as needed, including environment variables, replica counts, etc.

To view all available default configurations and decide what you want to override:

CODE
helm show values expertflow/campaigns --version 5.0

This command prints the full default values.yaml file used by the CX chart, which serves as a reference for all configurable parameters.

We recommend only overriding the values you need in your custom file to keep the configuration lean and maintainable.

Deploy the CX Campaigns helm chart:

CODE
helm upgrade --install --namespace expertflow   --set global.efCxReleaseName="ef-cx"  cx-campaigns --debug --values helm-values/cx-campaigns-custom-values.yaml expertflow/campaigns --version 5.0 

Make sure to assign the role conversation-studio-admin to the Keycloak user admin.
If you want to create an explicit user for campaigns, update the user in the campaigns siteEnvVars.

For MTT: Setup Non-MTT Components (Per Tenant)

To deploy CX Campaigns Studio, Conversation Studio, and QM for a tenant, use the mtt-single Helm chart.
For MTT, you must first disable Campaigns Studio, Conversation Studio, and QM-Backend in the existing charts or custom values by setting the enabled key to false for these components in their respective charts.

CODE
enabled: false
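
A hedged sketch of the combined overrides (the campaigns-studio and qm-backend keys are assumptions following the same pattern as conversation-studio shown earlier; confirm the exact sub-chart names in the respective charts):

CODE
conversation-studio:
  enabled: false
campaigns-studio:
  enabled: false
qm-backend:
  enabled: false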

For QM Backend, we need to manually create the PostgreSQL database first. The steps to create the database are mentioned in this guide.

First, create the namespace for the new tenant:

CODE
kubectl create namespace <tenant-name>

For MTT, you have to transfer the Mongo, Redis, and PostgreSQL certificates from the ef-external namespace to the newly created tenant namespace.
Replace <namespace> with the specific tenant namespace.

CODE
kubectl get secret mongo-mongodb-ca -n ef-external  -o yaml | sed 's/namespace: ef-external/namespace: <namespace>/' | kubectl create -f -
kubectl get secret redis-crt -n ef-external  -o yaml | sed 's/namespace: ef-external/namespace: <namespace>/' | kubectl create -f -
kubectl get secret ef-postgresql-crt -n ef-external  -o yaml | sed 's/namespace: ef-external/namespace: <namespace>/' | kubectl create -f -
kubectl get configmap ef-logback-cm -n expertflow  -o yaml | sed 's/namespace: expertflow/namespace: <namespace>/' | kubectl create -f -
kubectl get configmap ef-cx-efconnections-cm -n expertflow  -o yaml | sed 's/namespace: expertflow/namespace: <namespace>/' | kubectl create -f -
kubectl get secret ef-gitlab-secret -n expertflow  -o yaml | sed 's/namespace: expertflow/namespace: <namespace>/' | kubectl create -f -
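
If you provision several tenants, the same transfers can be scripted (a sketch; TENANT_NS is a placeholder for the tenant namespace):

CODE
TENANT_NS=<tenant-name>
for s in mongo-mongodb-ca redis-crt ef-postgresql-crt; do
  kubectl get secret "$s" -n ef-external -o yaml \
    | sed "s/namespace: ef-external/namespace: $TENANT_NS/" \
    | kubectl create -f -
done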

Customise values.yaml

You must first edit the values.yaml file to define your minimum required configurations.

Step 1: Create the Values File

Run the following command to create a new file:

CODE
vi helm-values/mtt-single-custom-values.yaml

Step 2: Add Required Minimum Configuration

In the opened file, add the following section to define the FQDN (Fully Qualified Domain Name) for ingress routing:

CODE
global:   
  ingressRouter: <CUSTOM-FQDN> 

🔁 Replace <CUSTOM-FQDN> with your actual domain, e.g., devops.example.com

This is the minimum required customisation for the CX Helm chart to work.

Optional: Add Further Customisations

You can extend the same mtt-single-custom-values.yaml file with additional configurations as needed, including environment variables, replica counts, etc.

To view all available default configurations and decide what you want to override:

CODE
helm show values expertflow/MTT-single --version 5.0

This command prints the full default values.yaml file used by the CX chart, which serves as a reference for all configurable parameters.

We recommend only overriding the values you need in your custom file to keep the configuration lean and maintainable.

Update the namespace and apply the MTT-single helm chart:

CODE
helm upgrade --install --namespace <tenant-ns>  --debug  <tenant-id> --values helm-values/mtt-single-custom-values.yaml  expertflow/MTT-single --version 5.0

CX Reporting

Configure TLS connection for MySQL

For MTT, each tenant has a dedicated namespace where the respective commands will be applied. For on-prem, the <tenant-namespace> will be expertflow.

Get the MySQL keystore (.jks) and certificate (.cert) files for MySQL. The .jks file is required for configuring the reporting connector, whereas the .cert file is required for the Apache Superset SSL configuration. The skeleton project (cim-solution) already contains the default .jks files in the keystore directory. Replace the mykeystore.jks file in the cim-solution/kubernetes/pre-deployment/reportingConnector/keystore/ directory with the actual file acquired.

Create a ConfigMap for the keystore.jks used for MySQL TLS:

CODE
kubectl create configmap -n <tenant-namespace> ef-reporting-connector-keystore-cm --from-file=pre-deployment/reportingConnector/keystore/mykeystore.jks

Create a <tenant_config_directory> directory under pre-deployment/reportingConnector/, place each tenant's reporting-connector.conf in it, and set the mysql_dbms_additional_params value as shown below.

CODE
mkdir pre-deployment/reportingConnector/<tenant_config_directory>
CODE
mysql_dbms_additional_params=noDatetimeStringSync=true&useSSL=true&requireSSL=true&trustServerCertificate=true&clientCertificateKeyStoreUrl=file:///root/config/certs/mykeystore.jks&clientCertificateKeyStorePassword={KEYSTORE_PASSWORD}
 
# Replace the {KEYSTORE_PASSWORD} with your original keystore password. Use "changeit" in case of default password.

Reporting Connector Config-Map Setup

For database creation on MTT, refer to the pre-requisite of EF Data Platform

Create the databases in the target database management system using the scripts from the pre-deployment/reportingConnector/dbScripts/dbcreation directory. The name of each database will vary from tenant to tenant.

Update the config in pre-deployment/reportingConnector/<tenant_config_directory>/reporting-connector.conf as per the parameters below.

| Parameter | Requirement |
|---|---|
| fqdn | Use the FQDN of the CX solution specific to each tenant |
| svc_name | http://ef-cx-historical-reports-svc.expertflow.svc.cluster.local:8081 |
| tenant_id | Unique identifier for each tenant. In case of MTT, the tenant_id will be the name of the tenant; for on-prem, the tenant_id will be expertflow |
| browser_language | en-US or ar |
| connection_type | mysql or mssql |
| sql_dbms_server_ip | mysql.ef-mysql.svc.cluster.local |
| sql_dbms_port | 3306 for MySQL / 1433 for MSSQL |
| sql_dbms_username | <username> |
| sql_dbms_password | <password> |
| sql_database_name | <database name specific to each tenant> |

In case of MTT, update the following parameters as well:

| Parameter | Value |
|---|---|
| conversation_manager_db_name | <tenant_id> |
| bot_framework_db_name | <tenant_id> |
| ccm_db_name | <tenant_id> |
| routing_engine_db_name | <tenant_id> |
| cim_customer_db_name | <tenant_id> |
| business_calendars_db_name | <tenant_id> |
| state_events_logger_db_name | <tenant_id> |
| admin_panel_db_name | <tenant_id> |

In case of a single-tenant deployment, update the following parameters as well:

| Parameter | Value |
|---|---|
| conversation_manager_db_name | expertflow |
| bot_framework_db_name | expertflow |
| ccm_db_name | expertflow |
| routing_engine_db_name | expertflow |
| cim_customer_db_name | expertflow |
| business_calendars_db_name | expertflow |
| state_events_logger_db_name | expertflow |
| admin_panel_db_name | expertflow |

Apply configuration for Reporting-Connector (For On Prem)

CODE
kubectl -n expertflow create configmap ef-reporting-connector-conf-cm --from-file=pre-deployment/reportingConnector/reporting-connector.conf

Apply configuration for Reporting-Connector on the desired tenant’s namespace

Create a directory for each tenant for MTT

CODE
mkdir -p pre-deployment/reportingConnector/<tenant_config_directory>

Copy this file into the directory:

CODE
cp -r pre-deployment/reportingConnector/reporting-connector.conf  pre-deployment/reportingConnector/<tenant_config_directory>/reporting-connector.conf

Edit this file according to your configuration

CODE
vi pre-deployment/reportingConnector/<tenant_config_directory>/reporting-connector.conf

Apply this file after updating <tenant_config_directory> and namespace

CODE
kubectl -n <tenant-namespace> create configmap ef-reporting-connector-conf-cm --from-file=pre-deployment/reportingConnector/<tenant_config_directory>/reporting-connector.conf

Customise values.yaml

You must first edit the values.yaml file to define your minimum required configurations.

Step 1: Create the Values File

Run the following command to create a new file:

CODE
vi helm-values/cx-reporting-scheduler-custom-values.yaml

Step 2: Add Required Minimum Configuration

In the opened file, add the following section to define the FQDN (Fully Qualified Domain Name) for ingress routing:

CODE
global:   
  ingressRouter: <CUSTOM-FQDN> 

🔁 Replace <CUSTOM-FQDN> with your actual domain, e.g., devops.example.com.

This is the minimum required customisation for the CX Helm chart to work.

Optional: Add Further Customisations

You can extend the same cx-reporting-scheduler-custom-values.yaml file with additional configurations as needed, including environment variables, replica counts, etc.

To view all available default configurations and decide what you want to override:

CODE
helm show values expertflow/reporting --version 5.0

This command prints the full default values.yaml file used by the CX chart, which serves as a reference for all configurable parameters.

We recommend only overriding the values you need in your custom file to keep the configuration lean and maintainable.

Deploy the Reporting Scheduler

CODE
helm upgrade --install --namespace <tenant-namespace> --set global.efCxReleaseName="ef-cx"   cx-reporting --debug --values helm-values/cx-reporting-scheduler-custom-values.yaml  expertflow/reporting  --version 5.0

Expertflow ETL

For ETL deployment, see this guide

CX Eleveo Middleware

Create and Customise cx-middleware-custom-values.yaml

You must first create a custom values.yaml file to define your minimum required configurations.

Step 1: Create the Values File

Run the following command to create a new file:

CODE
vi helm-values/cx-middleware-custom-values.yaml

Step 2: Add Required Minimum Configuration

In the opened file, add the following section to define the FQDN (Fully Qualified Domain Name) for ingress routing:

CODE
global:   
  ingressRouter: <CUSTOM-FQDN> 

🔁 Replace <CUSTOM-FQDN> with your actual domain, e.g., devops.example.com.

This is the minimum required customisation for the CX Helm chart to work.

Optional: Add Further Customisations

You can extend the same cx-middleware-custom-values.yaml file with additional configurations as needed, including environment variables, replica counts, etc.

To view all available default configurations and decide what you want to override:

CODE
helm show values expertflow/eleveo-middleware --version 5.0

This command prints the full default values.yaml file used by the CX chart, which serves as a reference for all configurable parameters.

We recommend only overriding the values you need in your custom file to keep the configuration lean and maintainable.

Create and Customise cx-middleware-cronjob-custom-values.yaml

You must first create a custom values.yaml file to define your minimum required configurations.

Step 1: Create the Values File

Run the following command to create a new file:

CODE
vi helm-values/cx-middleware-cronjob-custom-values.yaml

Step 2: Add Required Minimum Configuration

In the opened file, add the following section to define the FQDN (Fully Qualified Domain Name) for ingress routing:

CODE
global:
  ingressRouter: <CUSTOM-FQDN>

🔁 Replace <CUSTOM-FQDN> with your actual domain, e.g., devops.example.com.

This is the minimum required customisation for the CX Helm chart to work.

Optional: Add Further Customisations

You can extend the same cx-middleware-cronjob-custom-values.yaml file with additional configurations as needed, including environment variables, replica counts, etc.

To view all available default configurations and decide what you want to override:

CODE
helm show values expertflow/middleware-cronjob --version 5.0

This command prints the full default values.yaml file used by the CX chart, which serves as a reference for all configurable parameters.

We recommend only overriding the values you need in your custom file to keep the configuration lean and maintainable.

Open the helm-values/cx-middleware-custom-values.yaml and helm-values/cx-middleware-cronjob-custom-values.yaml files and update the variables as documented here.

Run the following commands:

CODE
helm upgrade --install --namespace expertflow  --set global.efCxReleaseName="ef-cx"  eleveo-middleware  --values helm-values/cx-middleware-custom-values.yaml expertflow/eleveo-middleware  --version 5.0
helm upgrade --install --namespace expertflow  --set global.efCxReleaseName="ef-cx"  middleware-cronjob --debug --values helm-values/cx-middleware-cronjob-custom-values.yaml   expertflow/middleware-cronjob --version 5.0 
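
After both releases are installed, you can confirm them and check that the cronjob was scheduled:

CODE
helm -n expertflow list | grep -E 'eleveo-middleware|middleware-cronjob'
kubectl -n expertflow get cronjobs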

CiscoSyncService

Create and Customise cx-ciscosyncservice-custom-values.yaml

You must first create a custom values.yaml file to define your minimum required configurations.

Step 1: Create the Values File

Run the following command to create a new file:

CODE
vi helm-values/cx-ciscosyncservice-custom-values.yaml

Step 2: Add Required Minimum Configuration

In the opened file, add the following section to define the FQDN (Fully Qualified Domain Name) for ingress routing:

CODE
global:
  ingressRouter: <CUSTOM-FQDN>
siteEnvVars:
  - name: AUTH_SERVER_URL
    value: "https://{{ .Values.global.ingressRouter }}/auth/"
  - name: EF_SERVER_URL
    value: "https://{{ .Values.global.ingressRouter }}/unified-admin/"

🔁 Replace <CUSTOM-FQDN> with your actual domain, e.g., devops.example.com.
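
Assuming the chart renders siteEnvVars through Helm's tpl function (implied by the templated {{ .Values.global.ingressRouter }} references above), for devops.example.com the two variables would resolve to:

CODE
siteEnvVars:
  - name: AUTH_SERVER_URL
    value: "https://devops.example.com/auth/"
  - name: EF_SERVER_URL
    value: "https://devops.example.com/unified-admin/"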

This is the minimum required customisation for the CX Helm chart to work.

Optional: Add Further Customisations

You can extend the same cx-ciscosyncservice-custom-values.yaml file with additional configurations as needed, including environment variables, replica counts, etc.

To view all available default configurations and decide what you want to override:

CODE
helm show values expertflow/cisco-sync-service --version 5.0

This command prints the full default values.yaml file used by the CX chart, which serves as a reference for all configurable parameters.

We recommend only overriding the values you need in your custom file to keep the configuration lean and maintainable.

Deploy the CiscoSyncService helm chart by running:

CODE
helm upgrade --install --namespace expertflow  --set global.efCxReleaseName="ef-cx"  cisco-sync-service  --values helm-values/cx-ciscosyncservice-custom-values.yaml expertflow/cisco-sync-service --version 5.0 
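
A quick sanity check that the release deployed and its pod is running (the grep pattern assumes pod names derive from the release name; adjust if needed):

CODE
helm -n expertflow status cisco-sync-service
kubectl -n expertflow get pods | grep cisco-sync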

Rasa-X deployment

For deployment of the Rasa-X AI Assistant, refer to RASA-X Deployment using helm chart.

EFBI Server (Metabase)

It is recommended not to deploy Metabase on the same server where CX is deployed.

For deployment on a separate server, follow this guide

Log in with superadmin@admin.com

EFCX-Bootstrapping (For both On Prem & MTT Deployment)

Upon successful completion of the CX deployment, follow the following guide to perform bootstrapping for the tenant.

Deployments & Configurations for Tenant

This section covers the post-deployment steps needed to configure and initialize each tenant environment within the CX solution.

Step 1: Webhooks Registration

First, we need to add the webhook information to the Mongo database for the components that require bootstrapping upon tenant registration.

  • Export the MongoDB certificates:

    CODE
    mkdir /tmp/mongodb_certs
    CERTFILES=($(kubectl get secret mongo-mongodb-ca -n ef-external -o go-template='{{range $k,$v := .data}}{{$k}}{{"\n"}}{{end}}'))
    for f in ${CERTFILES[*]}; do   kubectl get secret mongo-mongodb-ca  -n ef-external -o go-template='{{range $k,$v := .data}}{{ if eq $k "'$f'"}}{{$v  | base64decode}}{{end}}{{end}}' > /tmp/mongodb_certs/${f} 2>/dev/null; done
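    # Verify the export; the next step expects mongodb-ca-cert and client-pem in this directory.
    ls -l /tmp/mongodb_certs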
  • Go to the CX-5.0/kubernetes directory and run the following commands to import the data into the webhooks collection:

    CODE
    cd CX-5.0/kubernetes
    kubectl -n ef-external run mongo-tools --image=mongo:6.0 --restart=Never -- sleep 3600
    kubectl -n ef-external cp ./post-deployment/cim-tenant.webhooks.json mongo-tools:/tmp/cim-tenant.webhooks.json
    kubectl -n ef-external cp /tmp/mongodb_certs/mongodb-ca-cert mongo-tools:/tmp/mongodb-ca-cert
    kubectl -n ef-external cp /tmp/mongodb_certs/client-pem mongo-tools:/tmp/combined.pem
    kubectl -n ef-external exec mongo-tools -- \
      mongoimport \
      --host mongo-mongodb.ef-external.svc.cluster.local \
      --port 27017 \
      --db cim-tenant \
      --collection webhooks \
      --file /tmp/cim-tenant.webhooks.json \
      --jsonArray \
      --ssl \
      --sslCAFile /tmp/mongodb-ca-cert \
      --sslPEMKeyFile /tmp/combined.pem \
      --username root \
      --password Expertflow123 \
      --authenticationDatabase admin
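    # Optional verification before deleting the helper pod: count the imported documents.
    # mongosh ships in the mongo:6.0 image; connection details mirror the import above.
    kubectl -n ef-external exec mongo-tools -- \
      mongosh "mongodb://root:Expertflow123@mongo-mongodb.ef-external.svc.cluster.local:27017/cim-tenant?authSource=admin" \
      --tls --tlsCAFile /tmp/mongodb-ca-cert --tlsCertificateKeyFile /tmp/combined.pem \
      --eval 'db.webhooks.countDocuments()'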
    kubectl -n ef-external delete pod mongo-tools

🧪 New Environment Variables (efConnectionVars)

The following environment variables must be managed depending on whether the solution is single-tenant (on-prem) or multi-tenant. These variables are available in the connection vars of all components, e.g., agent-desk, campaigns, core, amq.

CODE
ENABLE_CLOUD_MANAGED_CONNECTIONS: "false"     # true if testing with managed cloud DBs
CX_TENANT_URL: "http://ef-cx-cx-tenant-svc:3000" # reference to the cx tenant component
ROOT_DOMAIN: "expertflow.com"                 # Root domain of the multitenant provider, and "NIL" in case of on prem single tenant
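
To confirm a component picked these up, inspect its environment; <component-deployment> is a placeholder for the actual Deployment name (list them with kubectl -n expertflow get deploy):

CODE
kubectl -n expertflow exec deploy/<component-deployment> -- env | grep -E 'ENABLE_CLOUD_MANAGED_CONNECTIONS|CX_TENANT_URL|ROOT_DOMAIN'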

Also make sure that in the CX-Tenant component the FS_URL variable is properly configured; otherwise the dynamic domain will not be registered.

CODE
FS_URL: http://20.123.60.36:8000/add-domain

  • If the incoming FQDN matches the ROOT_DOMAIN, the solution operates in multi-tenant mode; e.g., for tenant1.expertflow.com the root domain expertflow.com matches the FQDN's domain.

  • If it does not match, the solution defaults to on-premises mode, using the default tenant ID: expertflow.

  • For on-prem deployments, the domain name must be expertflow, since domainName = tenantId (see the sketch below).
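
The following is an illustrative shell sketch (not the actual implementation) of how the mode follows from comparing the incoming FQDN with ROOT_DOMAIN:

CODE
# Illustration only: multi-tenant vs. on-prem mode selection
FQDN="tenant1.expertflow.com"
ROOT_DOMAIN="expertflow.com"
if [ "${FQDN#*.}" = "${ROOT_DOMAIN}" ]; then
  echo "multi-tenant mode, tenant: ${FQDN%%.*}"
else
  echo "on-prem mode, default tenant ID: expertflow"
fi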

Tenant Onboarding

This section provides the required steps and references for onboarding tenants after completing the CX deployment.

Configurations

  1. Keycloak Configuration Guide (Start from Step 14)

  2. Conversation-Studio configuration guide

  3. Run Expertflow ETL pipelines mentioned here

  4. For customer channel configuration, see customer channels.

  5. For CX-Voice deployment configurations, use this guide.

  6. For Campaigns, see the Campaigns Keycloak Configuration Guide.

  7. To add a new tenant after deployment, refer to the following guide.

  8. API Authentication and Authorization Configuration Guide.
