Deployment Guide
This guide details how to deploy Expertflow CX on Kubernetes using Helm. It covers prerequisites (Kubernetes setup, namespaces, FQDNs, TLS/SSL), and provides step-by-step instructions for deploying all CX components and required external services (MongoDB, Redis, PostgreSQL, MinIO, Keycloak, Vault) via Helm charts. The guide also explains multi-tenancy setup, component configuration, and post-deployment checks to ensure a stable environment for both on-premises and cloud deployments.
Prerequisites
The following table describes the prerequisites for using Helm for deployment.
Item | Description | When changed |
|---|---|---|
Kubernetes Setup | A standard and compatible release of Kubernetes is available. | A cleanly installed Kubernetes engine. This can be set up using the RKE2 Control Plane Deployment guide. |
Wildcard Domain (FQDN) | A valid FQDN is required to deploy the solution, for example "devops.ef.com". | By default there is no FQDN associated, and the helm chart(s) will exit with failure if the default value is used. |
External Components | All external components have their own helm charts available. | If you are using externally managed components such as MongoDB, MinIO, Redis, and PostgreSQL, the relevant values should be updated in the helm chart values. Details are provided below. |
TLS/SSL certificate | It is mandatory to have a valid SSL certificate already created in both the expertflow and ef-external namespaces. The default Ingress certificate name is "ef-ingress-tls-secret". | Required by default; must be created before the actual helm chart deployment. Update when the certificate is renewed or replaced by IT. |
Custom configurations | All components requiring custom changes should be updated in their respective values files. | Mandatory; an upgrade of the helm chart is required when values are updated. |
Ingress controller | By default, both resident and ef-cx helm charts use nginx as the ingress controller. | If using another ingress controller, for example Traefik or HAProxy, update all the relevant tags and annotations to reflect appropriate values. |
EF CX Helm Chart
Global Chart Details
In addition to the sub-chart details, the details for this meta chart are given below. Any key: value pair present in this file supersedes the corresponding sub-chart's values file.
Section | Item | Details | Default |
|---|---|---|---|
global | ingressRouter | Wildcard FQDN used for the EF-CX solution | "*.expertflow.com" |
 | imageRegistry | Default container registry to pull images from | |
 | ingressCertName | Default ingress certificate secret name; must be created before install | "ef-ingress-tls-secret" |
 | ingressClassName | Ingress class name | "nginx" |
 | commonIngressAnnotations | Common annotations for all ingress resources | "" |
 | efCommonVars_IS_WRAP_UP_ENABLED | Common environment variable | true |
 | efCommonVars_WRAPUP_TIME | Common environment variable | "60" |
 | efCommonVars_DEFAULT_ROOM_NAME | Common environment variable | CC |
 | efCommonVars_DEFAULT_ROOM_LABEL | Common environment variable | CC |
 | efCommonVars_DEFAULT_ROOM_DESCRIPTION | Common environment variable | Contact Center Room |
 | efCommonVars_CONVERSATION_SEARCH_WINDOW_HRS | Common environment variable | "24" |
 | efCommonVars_TZ | Common environment variable | UTC |
 | efCommonVars_MASK_ATTRIBUTES_PATH | Common environment variable | /sensitive.js |
 | efCommonVars_LOGGING_CONFIG | Common environment variable | /logback/logback-spring.xml |
 | efCommonVars_ROOM_IS_USER_ID | Common environment variable | false |
 | clusterDomain | Root domain for the cluster DNS | "cluster.local" |
imageCredentials | registry | Container image registry; must be the same as global.imageRegistry | |
 | username | Username for the registry | efcx |
 | password | Password for the user of the registry | RecRpsuH34yqp56YRFUb |
 | email | Email address for the registry config | |
efConnectionVars | | Contains the list of all the sub-charts' connection parameters | list of parameters |
sub-chart | enabled | Enable or disable a sub-chart deployment (true \| false) | true |
The image pull secret is created at runtime based on these variables: a valid dockerconfig in JSON format is generated and added to the Kubernetes engine as a secret with the name ef-gitlab-secret.
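For illustration, the dockerconfig JSON that the chart builds at runtime can be reconstructed in plain shell. This is a sketch only: the registry URL below is an assumption (the chart's global.imageRegistry value applies); the username/password are the chart defaults shown in the table above.

```shell
# Illustration only: rebuild the dockerconfig JSON that the chart
# generates from the imageCredentials values.
REGISTRY="gitlab.expertflow.com"   # assumption: matches global.imageRegistry
USERNAME="efcx"
PASSWORD="RecRpsuH34yqp56YRFUb"

# "auth" is base64("username:password"), as the docker config format requires
AUTH=$(printf '%s:%s' "$USERNAME" "$PASSWORD" | base64 | tr -d '\n')
DOCKERCONFIG=$(printf '{"auths":{"%s":{"username":"%s","password":"%s","auth":"%s"}}}' \
  "$REGISTRY" "$USERNAME" "$PASSWORD" "$AUTH")
echo "$DOCKERCONFIG"

# The equivalent secret could also be created manually:
#   kubectl -n expertflow create secret docker-registry ef-gitlab-secret \
#     --docker-server="$REGISTRY" --docker-username="$USERNAME" \
#     --docker-password="$PASSWORD"
```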
All sub-charts are named after the component for which they are developed, and their values are evaluated from the meta chart's values.yaml file.
Sub-Chart Details
The following details apply to all sub-charts.
Add helm repository
helm repo add expertflow https://expertflow.github.io/charts/
update helm repo
helm repo update expertflow
Helm chart functional groups
CX helm charts are divided into functional groups.
Group | Description | Dependency |
|---|---|---|
CX | Serves the basic and core functionality of the CX solution. | External Components |
Web Channels | Provides CX enhancements for digital channels. | CX |
AgentDesk | Provides a separate deployment for customers where AgentDesk is optional. | CX |
Campaigns | Provides Campaigns collaboration. | CX |
Reporting | Reporting related to the CX. | CX |
Eleveo | Eleveo functional group. | CX |
CiscoScheduler | Cisco functional group. | CX |
mtt-single | Hosts all non-MTT components that are deployed separately for each tenant instance. | CX |
Metabase | Reporting. | CX |
Prepare for CX Deployment
Step 1: Clone the Expertflow CX repository
git clone -b CX-5.0 https://efcx:RecRpsuH34yqp56YRFUb@gitlab.expertflow.com/cim/cim-solution.git CX-5.0
cd CX-5.0/kubernetes/
Step 2: Create Namespaces
Create a namespace expertflow for all Expertflow components.
Run the following command on the control-plane node.
kubectl create namespace expertflow
Create a namespace ef-external for all the external elements of the Expertflow CX solution such as Mongo, Redis, MinIO, etc.
Run the following command on the control-plane node.
kubectl create namespace ef-external
Ingress Controller Selection
The default ingressClass is set to "nginx" in the global section of all helm charts. If you prefer to use another ingress controller, update the ingressClassName to the appropriate value.
All helm charts served at the expertflow helm repository (CX groups/components and external components) are by default compatible with the ingress-nginx ingress controller, using ingress-nginx annotations. Should there be a requirement for another ingress controller such as Traefik, HAProxy, or Contour, adjust the annotations for all components accordingly. A dedicated guide for using Traefik as the ingress controller with the CX solution is available.
Add TLS Certificates
For self-signed certificates, use this guide in both the ef-external and expertflow namespaces (use for lab VMs).

Only for multi-tenancy: a wildcard SSL certificate is required (e.g., *.expertflow.com). The certificate (server.crt) and private key (server.key) will be provided by the IT department. You must create a Kubernetes secret with these files in both namespaces (expertflow & ef-external). The default secret name must be ef-ingress-tls-secret:

kubectl -n expertflow create secret tls ef-ingress-tls-secret --key server.key --cert server.crt
kubectl -n ef-external create secret tls ef-ingress-tls-secret --key server.key --cert server.crt

For commercial certificates, import them as tls.crt and tls.key and create a secret with the name ef-ingress-tls-secret in both the ef-external and expertflow namespaces.

For Let's Encrypt SSL for EFCX (use for any VM other than lab, i.e., AWS, Contabo, etc.):
NOTE: When using Let's Encrypt based TLS certificates, you will have to enable the correct annotations in all the relevant values files.
sed -i -e 's/#cert-manager.io\/cluster-issuer: /cert-manager.io\/cluster-issuer: /g' helm/keycloak/values.yaml
sed -i -e 's/#cert-manager.io\/cluster-issuer: /cert-manager.io\/cluster-issuer: /g' helm/apisix/values.yaml
This procedure is required for both externals and all CX group charts being deployed.
Step 3: Apply Image Pull secret
Run the following commands for applying ImagePullSecrets of Expertflow CX images.
kubectl apply -f pre-deployment/registryCredits/ef-imagePullSecret-ef-external.yaml
Create a directory to hold values files for all the helm charts.
mkdir helm-values
Custom Password Interpolations
Below are the interpolations when using a custom or non-default password for MongoDB, MinIO, Redis, PostgreSQL, and ActiveMQ.
Component with custom password | Update required in |
|---|---|
MongoDB | |
PostgreSQL | |
MinIO | |
Redis (ACL enabled) | |
Keycloak | N/A |
ActiveMQ | N/A |
Setup SQL Database
Expertflow CX requires PostgreSQL for storing configuration data.
If you are deploying external components with provided TLS certificates, you must run the following command before deployment
kubectl apply -f pre-deployment/static-tls
PostgreSQL (RECOMMENDED)

- If you do not have PostgreSQL in your environment, create the PostgreSQL Config-Map to create the necessary databases and preload them with bootstrap configurations.
- Download the values.yaml file locally to customize the parameter values.
- Update the values file.
- For Worker HA deployment, add the same tolerations used for the other external components (see below).
- Deploy PostgreSQL.
- For managed PostgreSQL, see this guide for configuring PostgreSQL for Expertflow CX.
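The PostgreSQL steps above follow the same pattern as the other external components in this guide. The following is a hedged sketch of the likely command sequence; the chart name expertflow/postgresql and the values-file name are assumptions, not confirmed by this document — verify them against the repository. Commands are printed for review via a wrapper; replace the wrapper body with `"$@"` to execute.

```shell
# Hedged sketch only -- chart/values names are assumptions.
run() { echo "+ $*"; }   # print instead of executing

# 1. Fetch the default values file locally for customization
run helm show values expertflow/postgresql --version 5.0 \> helm-values/ef-postgresql-custom-values.yaml

# 2. Edit helm-values/ef-postgresql-custom-values.yaml (passwords, tolerations)

# 3. Deploy the chart into the ef-external namespace
run helm upgrade --install=true --namespace=ef-external \
  --values=helm-values/ef-postgresql-custom-values.yaml \
  postgresql expertflow/postgresql --version 5.0
```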
Deploy CX External Components
Expertflow CX requires the following 3rd party components.
Component | Description |
|---|---|
Cache – Redis (ACL enabled) | Key-value based caching engine, used by most of the EF-CX components. |
MongoDB | NoSQL database; maintains and serves as the primary backing store for the EF-CX solution. |
MinIO | S3-compliant object storage. |
IAM (Keycloak) | Realm-based auth management tool. |
You may use them from your existing environment or from a cloud provider.
Setup IAM (Keycloak)
Prerequisites
Before proceeding with the Keycloak deployment, please update the backend database connection string parameters (when using non-default passwords).
clone the values file and update the parameter values
helm show values expertflow/keycloak --version 5.0 > helm-values/ef-keycloak-custom-values.yaml
edit helm-values/ef-keycloak-custom-values.yaml and update the password for postgresql database
global:
ingressRouter: <DEFAULT-FQDN>
externalDatabase:
password: "Expertflow123"
The default Keycloak deployment uses PostgreSQL running inside the same Kubernetes cluster. When using a managed PostgreSQL database instance, update the above parameters with the relevant information.
For Worker HA deployments, add the following tolerations:-
tolerations:
- key: "node.kubernetes.io/unreachable"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 60 # Evict after 60 seconds of being unreachable
- key: "node.kubernetes.io/not-ready"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 60 # Evict after 60 seconds of being not-ready
IAM (Keycloak) Deployment
IAM (Keycloak) is used as the centralized authentication and authorization component for Expertflow CX. Follow these steps to set up Keycloak.
Now, deploy Keycloak by running the following command
helm upgrade --install=true --debug --namespace=ef-external --values=helm-values/ef-keycloak-custom-values.yaml keycloak expertflow/keycloak --version 5.0
Check the Keycloak installation status by using the following command:
kubectl -n ef-external rollout status sts keycloak
Setup MongoDB
Expertflow CX uses MongoDB for storing all CX events, activities, and some configuration data as well.
Skip this step if you already have MongoDB in your environment that can be used by Expertflow CX. For using MongoDB from a managed environment, see this guide for necessary configurations.
Clone the values file to update the parameter values
helm show values expertflow/mongodb --version 5.0 > helm-values/ef-mongodb-custom-values.yaml
Update the following values file helm-values/ef-mongodb-custom-values.yaml as mentioned below
auth:
rootPassword: "Expertflow123"
For Worker HA deployments, add the following tolerations:-
tolerations:
- key: "node.kubernetes.io/unreachable"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 60 # Evict after 60 seconds of being unreachable
- key: "node.kubernetes.io/not-ready"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 60 # Evict after 60 seconds of being not-ready
Deploy MongoDB by running the following command.
helm upgrade --install=true --namespace=ef-external --values=helm-values/ef-mongodb-custom-values.yaml mongo expertflow/mongodb --version 5.0
Check the MongoDB deployment status by running the following command:
kubectl -n ef-external rollout status sts mongo-mongodb
Setup MinIO as S3 Storage
Expertflow CX uses MinIO for storing files exchanged between agents, customers, and/or bots. Install it via Helm using the following commands.
Clone the values file for updating the parameter values
helm show values expertflow/minio --version 5.0 > helm-values/ef-minio-custom-values.yaml
Update the MinIO values file helm-values/ef-minio-custom-values.yaml with the required access key (rootUser) and secret key (rootPassword) values
auth:
rootUser: minioadmin
rootPassword: "minioadmin"
Deploy the minio helm chart
helm upgrade --install=true --namespace=ef-external --values=helm-values/ef-minio-custom-values.yaml minio expertflow/minio --version 5.0
Wait for the minio deployment to get ready
kubectl -n ef-external rollout status deployment minio --timeout=5m
Digital Channel Icons Bootstrapping
Proceed with icons bootstrapping.
kubectl apply -f scripts/minio-helper.yaml
kubectl -n ef-external --timeout=90s wait --for=condition=ready pod minio-helper
kubectl -n ef-external cp post-deployment/data/minio/bucket/default minio-helper:/tmp/
kubectl -n ef-external cp scripts/icon-helper.sh minio-helper:/tmp/
kubectl -n ef-external exec -it minio-helper -- /bin/sh /tmp/icon-helper.sh
kubectl delete -f scripts/minio-helper.yaml
Setup Redis
CX uses Redis for storing active system state of most of the CX objects. Redis is deployed with Access Control Lists (ACLs) to manage multiple users and credentials securely.
Clone the values file to update the parameter values
helm show values expertflow/redis --version 5.0 > helm-values/ef-redis-custom-values.yaml
Update the following values helm-values/ef-redis-custom-values.yaml as mentioned below:-
auth:
password: "Expertflow123" # Change this to match the requirements
For Worker HA deployments, add the following tolerations:-
tolerations:
- key: "node.kubernetes.io/unreachable"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 60 # Evict after 60 seconds of being unreachable
- key: "node.kubernetes.io/not-ready"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 60 # Evict after 60 seconds of being not-ready
Create Redis ACL Secret
kubectl -n ef-external create secret generic ef-redis-acl-secret --from-literal=superuser=Expertflow464
Run the following command to deploy Redis.
helm upgrade --install=true --namespace=ef-external --values=helm-values/ef-redis-custom-values.yaml redis expertflow/redis --version 5.0
Setup Application Gateway (APISIX)
Clone the apisix values.yaml file
helm show values expertflow/apisix --version 5.0 > helm-values/apisix-custom-values.yaml
Update the apisix-custom-values.yaml file with the following parameters
global:
  ingressRouter: "*.expertflow.com" # wildcard for MTT; for on-prem, replace with your FQDN
ingressClassName: "nginx"
ingressTlsCertName: "ef-ingress-tls-secret"
Deploy the apisix using updated custom-values.yaml file
helm upgrade --install --namespace ef-external --values helm-values/apisix-custom-values.yaml apisix expertflow/apisix --version 5.0
Verify the deployment of the apisix
kubectl -n ef-external get deploy
For MTT: Setup Nginx Router for Multi-Deployment Routing (Non-MTT Components)
This manifest typically includes a Service, ConfigMap, and Deployment.
Run the following command to deploy the tenant router:
kubectl -n expertflow apply -f pre-deployment/nginx-router/nginx-router-manifests.yaml
Setup CX Bus (ActiveMQ)
Clone the values file to update the parameters required
helm show values expertflow/activemq --version 5.0 > helm-values/ef-activemq-custom-values.yaml
helm upgrade --install=true --namespace=ef-external --values=helm-values/ef-activemq-custom-values.yaml activemq expertflow/activemq --version 5.0
CX Clamav
ClamAV is an optional scanning service that scans files before they are uploaded to the file engine. You can enable/disable scanning via the file engine's environment variable IS_SCAN_ENABLED; by default, it is enabled.
Customise the deployment by fetching the values.yaml file and editing it as per your requirements.
helm show values expertflow/clamav --version 5.0 > helm-values/cx-clamav-values.yaml
Edit/update the values file helm-values/cx-clamav-values.yaml with
global:
ingressRouter: <DEFAULT-FQDN>
Deploy the Clamav helm chart by
helm upgrade --install --namespace ef-external --set global.efCxReleaseName="ef-cx" clamav --debug --values helm-values/cx-clamav-values.yaml helm/clamav --version 5.0
Setup Vault
Copy mongo-mongodb-ca from ef-external to vault namespace
kubectl create namespace vault
kubectl get secret mongo-mongodb-ca -n ef-external -o yaml | sed 's/namespace: ef-external/namespace: vault/' | kubectl create -f -
Customise values.yaml
Use the following vault configuration guide
Use the following vault dynamic database configuration guide
Deploy CX Components
Custom Configuration
For detailed guidelines on applying environment-specific configurations using custom values.yaml layering, refer to the CX Helm Chart Custom Configuration Strategy guide.
SSL/TLS Import in Namespaces
Transfer the Mongo, Redis, PostgreSQL, and ActiveMQ certificates from the ef-external namespace:
kubectl get secret mongo-mongodb-ca -n ef-external -o yaml | sed 's/namespace: ef-external/namespace: expertflow/' | kubectl create -f -
kubectl get secret redis-crt -n ef-external -o yaml | sed 's/namespace: ef-external/namespace: expertflow/' | kubectl create -f -
kubectl get secret ef-postgresql-crt -n ef-external -o yaml | sed 's/namespace: ef-external/namespace: expertflow/' | kubectl create -f -
kubectl get secret activemq-tls -n ef-external -o yaml | sed 's/namespace: ef-external/namespace: expertflow/' | kubectl create -f -
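The pipelines above simply rewrite the metadata.namespace field of each exported object before re-creating it with `kubectl create -f -`. A minimal illustration of the sed rewrite, on a sample manifest and needing no cluster:

```shell
# Write a sample exported secret manifest (hypothetical file path)
cat > /tmp/sample-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: mongo-mongodb-ca
  namespace: ef-external
EOF

# Same substitution the transfer commands use: swap the namespace field
REWRITTEN=$(sed 's/namespace: ef-external/namespace: expertflow/' /tmp/sample-secret.yaml)
echo "$REWRITTEN"
```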
CX Core
Setup default translation file for customer widget
kubectl -n expertflow create configmap ef-widget-translations-cm --from-file=pre-deployment/app-translations/customer-widget/i18n/
Apply ConfigMap to enable log masking for all components in expertflow namespace:-
kubectl apply -f pre-deployment/logback/
kubectl -n expertflow create configmap ef-logback-cm --from-file=pre-deployment/logback/logback-spring.xml
Setup graphql schemas and mongodb rules configmaps
kubectl create configmap -n expertflow conversation-manager-graphql-schemas --from-file=pre-deployment/conversation-manager/graphql/schemas
kubectl create configmap -n expertflow conversation-manager-graphql-mongodb-rules --from-file=pre-deployment/conversation-manager/graphql/graphql-mongodb-rules.json
kubectl create configmap -n expertflow routing-engine-graphql-schemas --from-file=./pre-deployment/routing-engine/graphql/schemas
kubectl create configmap -n expertflow routing-engine-graphql-memory-rules --from-file=./pre-deployment/routing-engine/graphql/graphql-memory-rules.json
kubectl create configmap -n expertflow routing-engine-graphql-redis-rules --from-file=./pre-deployment/routing-engine/graphql/graphql-redis-rules.json
Create and Customise ef-cx-custom-values.yaml
For Single tenant, update the following configuration in ef-cx-custom-values.yaml
global:
  ingressRouter: <CUSTOM-FQDN>

In efConnectionVars, update the following variables as per your valid domain and databases:

ROOT_DOMAIN: "<TenantID>"                  # your tenantId
ENABLE_CLOUD_MANAGED_CONNECTIONS: "false"  # if using managed services, set to true
MONGODB_URI_PREFIX: "mongodb"              # set to mongodb+srv if using a DNS seedlist to manage replicas

In real-time reports, change the following extraEnvVars as per your reporting DB:

DATASOURCE_URL
DATASOURCE_USERNAME
DATASOURCE_PASSWORD

The below variables are added in the cx-tenant service:

# Modify the below values according to the cloud instance
- name: ENCRYPTION_KEY
  value: "6f3a2f95b7c0e4b1f37c0a9df8b68d7ea7d5bfbf41e2d88b3b9b55f4a6d1c2f3"
- name: AZURE_STORAGE_ACCOUNT
  value: "efcxblobstorage"
- name: AZURE_STORAGE_KEY
  value: "R/RYZ+knJWlYKr6fzWSLRSoauiY7/62K1n7kZ80d0zWPYqaZabokDCbJjFMgL20YhYYmGD4LxDre+AStFhKsqA=="
- name: FS_URL
  value: "http://20.123.60.36:8000/add-domain"  # replace values with voice
For Multi-tenant, update the following configuration in ef-cx-custom-values.yaml
global:
  ingressRouter: "*.expertflow.com"

In efConnectionVars, update the following variables as per your valid domain and databases:

ROOT_DOMAIN: "expertflow.com"              # your root domain
ENABLE_CLOUD_MANAGED_CONNECTIONS: "false"
MONGODB_URI_PREFIX: "mongodb"

In real-time reports, change the following extraEnvVars as per your reporting DB:

DATASOURCE_URL
DATASOURCE_USERNAME
DATASOURCE_PASSWORD

The below variables are added in the cx-tenant service:

# Modify the below values according to the cloud instance
- name: ENCRYPTION_KEY
  value: "6f3a2f95b7c0e4b1f37c0a9df8b68d7ea7d5bfbf41e2d88b3b9b55f4a6d1c2f3"
- name: AZURE_STORAGE_ACCOUNT
  value: "efcxblobstorage"
- name: AZURE_STORAGE_KEY
  value: "R/RYZ+knJWlYKr6fzWSLRSoauiY7/62K1n7kZ80d0zWPYqaZabokDCbJjFMgL20YhYYmGD4LxDre+AStFhKsqA=="
- name: FS_URL
  value: "http://20.123.60.36:8000/add-domain"  # replace values with voice
For MTT, change the CONTROLLER_URL in the conversation manager:
- name: CONTROLLER_URL
value: "http://tenantId-conversation-studio-svc.tenantId.svc:1880"
For MTT, disable the conversation-studio flag in the ef-cx-custom values file:
conversation-studio:
enabled: false
Deploy the CX Core using default values.
helm upgrade --install --namespace expertflow --create-namespace ef-cx --debug --values helm-values/ef-cx-custom-values.yaml expertflow/cx --version 5.0
“ef-cx” in the above command is the release name, which will be referenced in all subsequent functional groups' deployments.
check the status of CX components
kubectl -n expertflow get pods
Once deployed, copy the icons into the icons directory of the cx-tenant pod to persist them.
kubectl -n expertflow cp post-deployment/data/minio/bucket/default <cx-tenant-pod-name>:/icons
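The pod name placeholder above can be resolved automatically. This is a hedged helper: the assumption that the pod name contains "cx-tenant" is not confirmed by this document. The commands are built as strings and printed for review; run them directly against the cluster once verified.

```shell
# Hypothetical helper: resolve the cx-tenant pod name, then copy icons.
# Assumption: the pod name contains "cx-tenant".
CMD1='POD=$(kubectl -n expertflow get pods -o name | grep cx-tenant | head -n 1 | cut -d/ -f2)'
CMD2='kubectl -n expertflow cp post-deployment/data/minio/bucket/default "$POD":/icons'
echo "$CMD1"
echo "$CMD2"
```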
CX Agent Desk
Setup default translation file for Agent Desk
kubectl -n expertflow create configmap ef-app-translations-cm --from-file=pre-deployment/app-translations/unified-agent/i18n
Setup default canned messages translations file for Agent Desk
kubectl -n expertflow create configmap ef-canned-messages-cm --from-file=pre-deployment/app-translations/unified-agent/canned-messages
Apply CRM ConfigMap for Agent Desk
kubectl -n expertflow create configmap ef-crm-service-cm --from-file=pre-deployment/crm-service/
Note: For Single-tenant
Update the FQDN of the machine against url in supervisor_dashboard_cim_json_api field in file post-deployment/config/grafana/supervisor-dashboards/datasource.yml
############################################ JSON API CONFIGURATION ##########################################
- name: supervisor_dashboard_cim_json_api
url : https://devops234.ef.com ## Update with the FQDN of the machine
############################################ INFINITY API PLUGIN CONFIGURATION ##########################################
- name: infinity_cim_json_api
jsonData:
allowedHosts: #Add the FQDN of the registered Tenants
- "https://example1.com"
Note: For Multi-tenant
Update the FQDN of CX_TENANT_URL against the url field under supervisor_dashboard_cim_json_api in the file post-deployment/config/grafana/supervisor-dashboards/datasource.yml.

Add the FQDN of all tenants against the allowedHosts field under infinity_cim_json_api in the same file.
############################################ JSON API CONFIGURATION ##########################################
- name: supervisor_dashboard_cim_json_api
url : https://devops234.ef.com ##Update with the CX_TENANT_URL FQDN. Note: Don't use svc name
############################################ INFINITY API PLUGIN CONFIGURATION ##########################################
- name: infinity_cim_json_api
jsonData:
allowedHosts: #Add the FQDN of all the registered Tenants
- "*"
- "https://example1.com"
- "https://example2.com"
- "https://example3.com"
- "https://example4.com"
Apply the Grafana data-source manifest.
kubectl -n expertflow create secret generic ef-grafana-datasource-secret --from-file=post-deployment/config/grafana/supervisor-dashboards/datasource.yml
Apply Grafana provider manifest.
kubectl -n expertflow create cm ef-grafana-dashboard-provider-cm --from-file=post-deployment/config/grafana/supervisor-dashboards/dashboard.yml
Apply Config-map for the dashboards files using the steps below.
###### SUPERVISOR DASHBOARD ######
kubectl -n expertflow create configmap ef-grafana-supervisor-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/Supervisor_Dashboard_CIM.json
###### AGENT DASHBOARD ######
kubectl -n expertflow create configmap ef-grafana-agent-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/Agent_Dashboard_CIM.json
###### AGENT TEAMS DASHBOARD ######
kubectl -n expertflow create configmap ef-grafana-agent-teams-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/agent_teams_dashboard.json
###### AGENT PERFORMANCE DASHBOARD ######
kubectl -n expertflow create configmap ef-grafana-agent-performance-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/agent_performance_dashboard.json
###### SOCIAL MEDIA PERFORMANCE DASHBOARD ######
kubectl -n expertflow create configmap ef-grafana-social-media-performance-trend-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/social_media_performance_trend_dashboard.json
###### TEAM STATISTICS DASHBOARD ######
kubectl -n expertflow create configmap ef-grafana-team-statistics-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/team_statistics_dashboard.json
Install the Agent Desk using helm chart
Customise values.yaml
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" cx-agent-desk --debug --values helm-values/cx-agent-desk-custom-values.yaml expertflow/agent-desk --version 5.0
CX Channels
Customise values.yaml
Deploy the Channels helm chart by
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" --debug cx-channels --values helm-values/cx-channels-custom-values.yaml expertflow/channels --version 5.0
CX Campaigns
Customise values.yaml
Deploy the CX Campaigns helm chart by
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" cx-campaigns --debug --values helm-values/cx-campaigns-custom-values.yaml expertflow/campaigns --version 5.0
Make sure to assign the role conversation-studio-admin to the Keycloak user admin.
If you want to create an explicit user for campaigns, update the user in the campaigns siteEnvVars
For MTT: Setup Non-MTT Components (Per Tenant)
To deploy CX Campaigns Studio, Conversation Studio, and QM for a tenant, use the mtt-single Helm chart.
For MTT, you have to disable Campaigns Studio, Conversation Studio & QM-Backend from existing charts or custom values, by first setting the enabled key to false for these components in their respective charts.
enabled : false
For QM Backend, we need to manually create the PostgreSQL database first. The steps to create the database are mentioned in this guide.
First you need to create the namespace for new tenant
kubectl create namespace <tenant-name>
For MTT, you have to transfer the Mongo, Redis, PostgreSQL Certificates from the ef-external namespace to newly created tenant namespace.
please change <namespace> with the specific tenant namespace.
kubectl get secret mongo-mongodb-ca -n ef-external -o yaml | sed 's/namespace: ef-external/namespace: <namespace>/' | kubectl create -f -
kubectl get secret redis-crt -n ef-external -o yaml | sed 's/namespace: ef-external/namespace: <namespace>/' | kubectl create -f -
kubectl get secret ef-postgresql-crt -n ef-external -o yaml | sed 's/namespace: ef-external/namespace: <namespace>/' | kubectl create -f -
kubectl get configmap ef-logback-cm -n expertflow -o yaml | sed 's/namespace: expertflow/namespace: <namespace>/' | kubectl create -f -
kubectl get configmap ef-cx-efconnections-cm -n expertflow -o yaml | sed 's/namespace: expertflow/namespace: <namespace>/' | kubectl create -f -
kubectl get secret ef-gitlab-secret -n expertflow -o yaml | sed 's/namespace: expertflow/namespace: <namespace>/' | kubectl create -f -
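For many tenants, the transfer commands above can be generated in a loop. This is a hedged sketch: TENANT_NS is a placeholder value, and the commands are printed for review rather than executed; pipe the output to `bash` once verified.

```shell
# Placeholder tenant namespace -- replace with the real one.
TENANT_NS="tenant1"

# Generate (not execute) the transfer commands for one tenant namespace.
CMDS=$(
  for S in mongo-mongodb-ca redis-crt ef-postgresql-crt; do
    echo "kubectl get secret $S -n ef-external -o yaml | sed 's/namespace: ef-external/namespace: $TENANT_NS/' | kubectl create -f -"
  done
  for CM in ef-logback-cm ef-cx-efconnections-cm; do
    echo "kubectl get configmap $CM -n expertflow -o yaml | sed 's/namespace: expertflow/namespace: $TENANT_NS/' | kubectl create -f -"
  done
  echo "kubectl get secret ef-gitlab-secret -n expertflow -o yaml | sed 's/namespace: expertflow/namespace: $TENANT_NS/' | kubectl create -f -"
)
echo "$CMDS"
```

To execute against the cluster, review the output and then run `echo "$CMDS" | bash`.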
Customise values.yaml
Update the namespace and apply the mtt-single helm chart
helm upgrade --install --namespace <tenant-ns> --debug <tenant-id> --values helm-values/mtt-single-custom-values.yaml expertflow/mtt-single --version 5.0
CX Reporting
Configure TLS connection for MySQL
For MTT, each tenant has a dedicated namespace where the respective components will be deployed. For on-prem, the <tenant-namespace> will be expertflow.
Get the MySQL keystore (.jks) and certificate (.cert) files for MySQL. The .jks file is required for configuration of the reporting connector, whereas the .cert file is required for Apache Superset SSL configuration. The skeleton project (cim-solution) already contains default .jks files in the keystore directory. Replace the default mykeystore.jks file in the cim-solution/kubernetes/pre-deployment/reportingConnector/keystore/ directory with the acquired file.
Create the Config-Map for the keystore.jks used for MySQL TLS
kubectl create configmap -n <tenant-namespace> ef-reporting-connector-keystore-cm --from-file=pre-deployment/reportingConnector/keystore/mykeystore.jks
Create a directory <tenant_config_directory> under pre-deployment/reportingConnector/, place each tenant's reporting-connector.conf in it, and set the mysql_dbms_additional_params value as shown below.
mkdir pre-deployment/reportingConnector/<tenant_config_directory>
mysql_dbms_additional_params=noDatetimeStringSync=true&useSSL=true&requireSSL=true&trustServerCertificate=true&clientCertificateKeyStoreUrl=file:///root/config/certs/mykeystore.jks&clientCertificateKeyStorePassword={KEYSTORE_PASSWORD}
# Replace the {KEYSTORE_PASSWORD} with your original keystore password. Use "changeit" in case of default password.
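The placeholder substitution can be done with sed. This is a hedged example: the temp file path is for illustration only, and "changeit" is the stated default keystore password — use your real password in production.

```shell
# Illustration: fill the {KEYSTORE_PASSWORD} placeholder in a copy of
# reporting-connector.conf (hypothetical path; "changeit" is the default).
CONF=/tmp/reporting-connector.conf
printf '%s\n' 'mysql_dbms_additional_params=noDatetimeStringSync=true&useSSL=true&requireSSL=true&trustServerCertificate=true&clientCertificateKeyStoreUrl=file:///root/config/certs/mykeystore.jks&clientCertificateKeyStorePassword={KEYSTORE_PASSWORD}' > "$CONF"

# Substitute the placeholder in place
sed -i 's/{KEYSTORE_PASSWORD}/changeit/' "$CONF"
cat "$CONF"
```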
Reporting Connector Config-Map Setup
For database creation on MTT, refer to the pre-requisite of EF Data Platform
Create the database in the target database management system using the scripts from the pre-deployment/reportingConnector/dbScripts/dbcreation directory. The name of each database will vary from tenant to tenant.
Update the config present in pre-deployment/reportingConnector/<tenant_config_directory>/reporting-connector.conf as per the below mentioned parameters
Parameter | Requirement |
|---|---|
fqdn | Use the FQDN of the CX solution specific to each tenant. |
svc_name | http://ef-cx-historical-reports-svc.expertflow.svc.cluster.local:8081 |
tenant_id | Unique identifier for each tenant. In case of MTT, the tenant_id will be the name of the tenant; for on-prem, the tenant_id will be expertflow |
browser_language | en-US or ar |
connection_type | mysql or mssql |
sql_dbms_server_ip | mysql.ef-mysql.svc.cluster.local |
sql_dbms_port | for mysql 3306 / for mssql 1433 |
sql_dbms_username | <username> |
sql_dbms_password | <password> |
sql_database_name | <database name specific to each tenant> |
In case of MTT, Update the following parameters as well | |
conversation_manager_db_name | <tenant_id> |
bot_framework_db_name | <tenant_id> |
ccm_db_name | <tenant_id> |
routing_engine_db_name | <tenant_id> |
cim_customer_db_name | <tenant_id> |
business_calendars_db_name | <tenant_id> |
state_events_logger_db_name | <tenant_id> |
admin_panel_db_name | <tenant_id> |
In case of Single tenant deployment, Update the following parameters as well | |
conversation_manager_db_name | expertflow |
bot_framework_db_name | expertflow |
ccm_db_name | expertflow |
routing_engine_db_name | expertflow |
cim_customer_db_name | expertflow |
business_calendars_db_name | expertflow |
state_events_logger_db_name | expertflow |
admin_panel_db_name | expertflow |
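As an illustration, a filled-in reporting-connector.conf for a hypothetical MTT tenant named tenant1 might look like the following (assuming a key=value format; credentials and the tenant-specific database name are placeholders):

```
fqdn=tenant1.expertflow.com
svc_name=http://ef-cx-historical-reports-svc.expertflow.svc.cluster.local:8081
tenant_id=tenant1
browser_language=en-US
connection_type=mysql
sql_dbms_server_ip=mysql.ef-mysql.svc.cluster.local
sql_dbms_port=3306
sql_dbms_username=<username>
sql_dbms_password=<password>
sql_database_name=<database name specific to tenant1>
conversation_manager_db_name=tenant1
bot_framework_db_name=tenant1
ccm_db_name=tenant1
routing_engine_db_name=tenant1
cim_customer_db_name=tenant1
business_calendars_db_name=tenant1
state_events_logger_db_name=tenant1
admin_panel_db_name=tenant1
```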
Apply configuration for Reporting-Connector (For On Prem)
kubectl -n expertflow create configmap ef-reporting-connector-conf-cm --from-file=pre-deployment/reportingConnector/reporting-connector.conf
Apply configuration for Reporting-Connector in the desired tenant's namespace (For MTT)
Create a directory for each tenant
mkdir -p pre-deployment/reportingConnector/<tenant_config_directory>
Copy the base configuration file into this directory
cp -r pre-deployment/reportingConnector/reporting-connector.conf pre-deployment/reportingConnector/<tenant_config_directory>/reporting-connector.conf
Edit this file according to your configuration
vi pre-deployment/reportingConnector/<tenant_config_directory>/reporting-connector.conf
Create the ConfigMap after updating <tenant_config_directory> and the namespace
kubectl -n <tenant-namespace> create configmap ef-reporting-connector-conf-cm --from-file=pre-deployment/reportingConnector/<tenant_config_directory>/reporting-connector.conf
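With many tenants, the copy-and-edit steps above can be scripted. The sketch below is a hypothetical helper (not part of the CX tooling) that generates a per-tenant directory and rewrites the tenant_id entry, assuming a key=value file format; further keys (database names, fqdn) can be substituted the same way:

```shell
#!/usr/bin/env bash
# Hypothetical helper: derive a per-tenant reporting-connector.conf from the
# base template by rewriting the tenant_id entry.
set -euo pipefail

make_tenant_conf() {
  local template="$1" tenant="$2" basedir="$3"
  mkdir -p "$basedir/$tenant"
  # Replace the tenant_id value; all other lines are copied unchanged.
  sed "s/^tenant_id=.*/tenant_id=$tenant/" "$template" \
    > "$basedir/$tenant/reporting-connector.conf"
}
```

Each generated file can then be applied with the kubectl create configmap command shown above, pointing at that tenant's directory and namespace.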
Customise values.yaml
Deploy the Reporting Scheduler
helm upgrade --install --namespace <tenant-namespace> --set global.efCxReleaseName="ef-cx" cx-reporting --debug --values helm-values/cx-reporting-scheduler-custom-values.yaml expertflow/reporting --version 5.0
Expertflow ETL
For ETL deployment, see this guide
CX Eleveo Middleware
Create and Customise cx-middleware-custom-values.yaml
Create and Customise cx-middleware-cronjob-custom-values.yaml
Open the helm-values/cx-middleware-custom-values.yaml and helm-values/cx-middleware-cronjob-custom-values.yaml files and update the variables as documented here.
Run the following commands:
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" eleveo-middleware --values helm-values/cx-middleware-custom-values.yaml expertflow/eleveo-middleware --version 5.0
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" middleware-cronjob --debug --values helm-values/cx-middleware-cronjob-custom-values.yaml expertflow/middleware-cronjob --version 5.0
CiscoSyncService
Create and Customise cx-ciscosyncservice-custom-values.yaml
Deploy the CiscoSyncService Helm chart by running
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" cisco-sync-service --values helm-values/cx-ciscosyncservice-custom-values.yaml expertflow/cisco-sync-service --version 5.0
Rasa-X deployment
For deployment of the Rasa-X AI Assistant, refer to RASA-X Deployment using helm chart.
EFBI Server (Metabase)
It is recommended not to deploy Metabase on the same server where CX is deployed
To deploy it on a separate server, follow this guide
Login with superadmin@admin.com
EFCX-Bootstrapping (For both On Prem & MTT Deployment)
Upon successful completion of the CX deployment, follow the following guide to perform bootstrapping for the tenant.
Deployments & Configurations for Tenant
This section covers the post-deployment steps needed to configure and initialize each tenant environment within the CX solution.
Step 1: Webhooks Registration
First, add webhook information to the MongoDB database for the components that require bootstrapping upon tenant registration.
Export the MongoDB certificates using the following commands
mkdir /tmp/mongodb_certs
CERTFILES=($(kubectl get secret mongo-mongodb-ca -n ef-external -o go-template='{{range $k,$v := .data}}{{$k}}{{"\n"}}{{end}}'))
for f in ${CERTFILES[*]}; do kubectl get secret mongo-mongodb-ca -n ef-external -o go-template='{{range $k,$v := .data}}{{ if eq $k "'$f'"}}{{$v | base64decode}}{{end}}{{end}}' > /tmp/mongodb_certs/${f} 2>/dev/null; done
Go inside the kubernetes directory and execute the following commands to import data into the webhooks collection:
cd CX-5.0/kubernetes
kubectl -n ef-external run mongo-tools --image=mongo:6.0 --restart=Never -- sleep 3600
kubectl -n ef-external cp ./post-deployment/cim-tenant.webhooks.json mongo-tools:/tmp/cim-tenant.webhooks.json
kubectl -n ef-external cp /tmp/mongodb_certs/mongodb-ca-cert mongo-tools:/tmp/mongodb-ca-cert
kubectl -n ef-external cp /tmp/mongodb_certs/client-pem mongo-tools:/tmp/combined.pem
kubectl -n ef-external exec mongo-tools -- \
  mongoimport \
  --host mongo-mongodb.ef-external.svc.cluster.local \
  --port 27017 \
  --db cim-tenant \
  --collection webhooks \
  --file /tmp/cim-tenant.webhooks.json \
  --jsonArray \
  --ssl \
  --sslCAFile /tmp/mongodb-ca-cert \
  --sslPEMKeyFile /tmp/combined.pem \
  --username root \
  --password Expertflow123 \
  --authenticationDatabase admin
kubectl -n ef-external delete pod mongo-tools
🧪 New Environment Variables (efConnectionVars)
The following environment variables must be managed depending on whether the solution is single-tenant (on-prem) or multi-tenant. These variables are available in the connection vars (efConnectionVars) of all components, e.g., agent-desk, campaigns, core, amq.
ENABLE_CLOUD_MANAGED_CONNECTIONS: "false" # true if testing with managed cloud DBs
CX_TENANT_URL: "http://ef-cx-cx-tenant-svc:3000" # reference to the cx tenant component
ROOT_DOMAIN: "expertflow.com" # Root domain of the multitenant provider, and "NIL" in case of on prem single tenant
Also make sure that in the CX-Tenant component the FS_URL variable is properly configured; otherwise, the dynamic domain will not be registered.
FS_URL: http://20.123.60.36:8000/add-domain
If the incoming FQDN matches the ROOT_DOMAIN, the solution operates in multi-tenant mode; for example, with tenant1.expertflow.com the root domain matches the FQDN's domain. If it does not match, the solution defaults to on-premises mode, using the default tenant ID expertflow. For on-prem deployments, the domain name must therefore be expertflow, since domainName = tenantId.
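The mode-selection rule above can be sketched as a small function (illustrative only; this is a hypothetical helper, not the actual CX implementation):

```shell
#!/usr/bin/env bash
# Sketch of the tenant-resolution rule: if the FQDN ends in the root domain,
# the subdomain becomes the tenant ID (multi-tenant mode); otherwise the
# solution falls back to the default on-prem tenant ID "expertflow".
resolve_tenant() {
  local fqdn="$1" root_domain="$2"
  case "$fqdn" in
    *."$root_domain")
      # Multi-tenant mode: strip ".<root_domain>" to get the tenant ID.
      echo "${fqdn%."$root_domain"}" ;;
    *)
      # On-prem mode: default tenant ID.
      echo "expertflow" ;;
  esac
}
```

For example, resolve_tenant tenant1.expertflow.com expertflow.com yields tenant1, while an FQDN outside the root domain yields expertflow.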
Tenant Onboarding
This section provides the required steps and references for onboarding tenants after completing the CX deployment.
Configurations
Keycloak Configuration Guide (Start from Step 14)
Conversation-Studio configuration guide
Run Expertflow ETL pipelines mentioned here
For customer channel configuration, see customer channels.
For CX-Voice deployment configurations, use this guide.
For Campaigns, follow the Campaigns Keycloak Configuration Guide
To add a new tenant after deployment, refer to the following guide