
Expertflow CX Deployment on Kubernetes

Introduction

This document illustrates the procedure and steps to deploy Expertflow CX Solution on any Kubernetes distribution (K3s or RKE2).

Intended Audience

This document is intended for IT operations personnel and system administrators with an understanding of Kubernetes and its core concepts such as Pods, Deployments, StatefulSets, and Services. More detailed information on these topics is available on the Kubernetes website and can be used as a reference point before starting the deployment of the CIM solution.

A detailed listing of the services and components used in the solution, along with related information, is available in the table at this link.

All commands reference the deployment of the solution for an example domain, referred to as FQDN. Replace FQDN with your actual domain name wherever it appears.

Pre-requisites for CIM Deployment

Before proceeding with the Expertflow CX deployment, please make sure that you have completed the Kubernetes deployment (K3s or RKE2) as per your requirements.

Expertflow CX Internal Components

Step 1: Clone the Expertflow CX Repository

1. Start by cloning the repository from GitLab.

BASH
git clone -b <branch-name> https://efcx:RecRpsuH34yqp56YRFUb@gitlab.expertflow.com/cim/cim-solution.git


Replace <branch-name> with your desired release branch name.

2. Change to the directory.

BASH
cd cim-solution/kubernetes

Step 2: Install Rancher (Optional Step)

Rancher is a web UI for managing Kubernetes clusters.

1. To deploy the Rancher Web UI, add the Helm repository (see the sketch below).
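
A minimal sketch of this step, assuming the standard upstream Rancher chart repository; the install command further below uses the bundled chart under external/rancher, so this is only needed if that local chart is unavailable:

BASH
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update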

2. Install cert-manager, which is required by Rancher.

After installation, wait at least 30 seconds for cert-manager to start.

BASH
helm upgrade --install=true cert-manager \
--wait=true \
--timeout=10m0s \
--debug \
--namespace cert-manager \
--create-namespace \
--version v1.10.0 \
--values=external/cert-manager/values.yaml \
external/cert-manager

3. Use the following command to see if all cert-manager pods are up and running.

BASH
kubectl get pods -n cert-manager
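
Optionally, instead of a fixed wait, you can block until all cert-manager pods report Ready (a generic kubectl wait sketch; adjust the timeout as needed):

BASH
kubectl -n cert-manager wait --for=condition=Ready pods --all --timeout=300s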

4. Deploy Rancher using the Helm chart.

BASH
helm upgrade --install=true --wait=true   --timeout=10m0s  --debug  rancher --namespace cattle-system --create-namespace --values=external/rancher/values.yaml  external/rancher

5. Rancher is by default not accessible outside the cluster. To make it accessible, change the service type from ClusterIP to NodePort:

BASH
kubectl -n cattle-system patch svc rancher  -p '{"spec": {"type": "NodePort"}}'

6. Get the Rancher Service port by using the following command:

BASH
kubectl -n cattle-system  get svc rancher -o go-template='{{(index .spec.ports 1).nodePort}}';echo;

7. Now you can access the Rancher Web UI at <any-node-IP-of-the-cluster>:<port-from-the-above-command>.

The default username/password is admin/ExpertflowRNCR.


Step 3: Create Namespace

All Expertflow components are deployed in a separate Kubernetes namespace called 'expertflow'.

1. Run the following command on the master node to create the namespace:

BASH
kubectl create namespace expertflow

2. All external components will be deployed in the ef-external namespace. Run the following command on the master node:

BASH
kubectl create namespace ef-external
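
You can optionally confirm that both namespaces exist before proceeding:

BASH
kubectl get namespace expertflow ef-external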

Step 4: Image Pull Secret

1. For the expertflow namespace, use the following command:

BASH
kubectl apply -f pre-deployment/registryCredits/ef-imagePullSecret-expertflow.yaml

2. Run the following command for the ef-external namespace:

BASH
kubectl apply -f pre-deployment/registryCredits/ef-imagePullSecret-ef-external.yaml


Step 5: Update the FQDN

1. Decide the FQDN to be used in your solution and change the <FQDN> to your actual FQDN as given in the following command:

BASH
sed -i 's/devops[0-9]*.ef.com/<FQDN>/g' cim/ConfigMaps/* pre-deployment/grafana/*  pre-deployment/keycloak/*  cim/Ingresses/traefik/* 


When deploying the solution on a single control-plane node, replace <FQDN> with the FQDN of your master node. For a multi-control-plane setup, use the VIP or the FQDN associated with the VIP. An example is given below.
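
For example, with a hypothetical FQDN of cx.example.com, the command becomes:

BASH
sed -i 's/devops[0-9]*.ef.com/cx.example.com/g' cim/ConfigMaps/* pre-deployment/grafana/*  pre-deployment/keycloak/*  cim/Ingresses/traefik/*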

Expertflow CX External Components 

The following external components are required and must be deployed for the Expertflow CX solution to work:

1. PostgreSQL 

PostgreSQL is deployed as the central datastore for both the LicenseManager and Keycloak.

1. Create a ConfigMap for PostgreSQL to load the LicenseManager database and create keycloak_db:

BASH
kubectl -n ef-external  create configmap ef-postgresql-license-manager-cm --from-file=./pre-deployment/licensemanager/licensemanager.sql

2. Deploy PostgreSQL using the Helm command given below:

BASH
helm upgrade --install=true --wait=true  --timeout=10m0s  --debug --namespace=ef-external --values=external/bitnami/postgresql/values.yaml ef-postgresql external/bitnami/postgresql
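
Optionally, check the PostgreSQL rollout status in the same way as the other components (a sketch; the StatefulSet name ef-postgresql matches the pod listing shown later in this guide):

BASH
kubectl -n ef-external rollout status sts ef-postgresql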

2. Keycloak

1. On the master node, create a global ConfigMap for Keycloak. Change the hostname and other parameters in the pre-deployment/keycloak/ef-keycloak-configmap.yaml file before applying this command:

BASH
kubectl apply -f pre-deployment/keycloak/ef-keycloak-configmap.yaml

2. The Helm command for Keycloak is given below:

BASH
helm upgrade --install=true --wait=true --timeout=10m0s --debug --namespace=ef-external --values=external/bitnami/keycloak/values.yaml keycloak  external/bitnami/keycloak/

3. You can check the status of deployment by using the following command:

BASH
kubectl -n ef-external rollout status sts keycloak

3. MongoDB

1. The Helm command for deploying MongoDB is given below:

BASH
helm upgrade --install=true --wait=true --timeout=10m0s  --debug --namespace=ef-external --values=external/bitnami/mongodb/values.yaml mongo  external/bitnami/mongodb/

2. Check the MongoDB deployment status by running the following command:

BASH
kubectl -n ef-external rollout status sts mongo-mongodb

4. MinIO

The Helm command for MinIO is as follows:

BASH
helm upgrade --install=true --wait=true --timeout=10m0s --debug  --namespace=ef-external --values=external/bitnami/minio/values.yaml minio  external/bitnami/minio/
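
Optionally, verify that MinIO has rolled out (a sketch assuming a Deployment named minio, as reflected in the pod listing later in this guide):

BASH
kubectl -n ef-external rollout status deployment minio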

5. Redis

The Helm command for deploying Redis is as follows:

BASH
helm upgrade --install=true --wait=true  --timeout=10m0s --debug --namespace=ef-external --values=external/bitnami/redis/values.yaml redis  external/bitnami/redis/

6. Grafana

Before proceeding with the steps below, edit the post-deployment/config/grafana/supervisor-dashboards/datasource.yml file and set the database connection parameters correctly.

1. For reporting_stats_datasource:

CODE
    type: mysql | mssql              => type of database: mysql or mssql
    url: 192.168.1.182               => complete URL where the data source is accessible (optional),
                                        e.g. mysql://user:secret@host:port/databaseName
                                        or jdbc:mysql://localhost:3306/exampleDb?user=myuser&password=mypassword
    password: 68i3nj7t               => password
    user: elonmusk                   => username
    database: cim_etl_report         => database name
    host: 192.168.1.182              => hostname or IP address of the database server hosting the data source

2. For supervisor_dashboard_cim_json_api:

CODE
    url: https://cim.expertflow.com    => FQDN of the machine (do not append a slash (/) at the end)

3. Create a secret from the datasource file.

CODE
kubectl -n ef-external create secret generic ef-grafana-datasource-secret --from-file=post-deployment/config/grafana/supervisor-dashboards/datasource.yml

4. Create the dashboard provider ConfigMap for Grafana.

CODE
kubectl create cm ef-grafana-dashboard-provider-cm -n ef-external --from-file=post-deployment/config/grafana/supervisor-dashboards/dashboard.yml


Apply only the ConfigMap that matches your database environment (MySQL or MSSQL); do not apply both.


5. Create the ConfigMap for the supervisor dashboard file matching your database environment, using the steps below.

For MySQL:

CODE
kubectl create configmap ef-grafana-supervisor-dashboard-mysql -n ef-external --from-file=post-deployment/config/grafana/supervisor-dashboards/Supervisor_Dashboard_CIM-mysql.json

For release 4.0 or older

Update the values under the dashboardsConfigMaps object in external/bitnami/grafana/values.yaml to match the following lines:

CODE
dashboardsConfigMaps:
   ###################### DASHBOARD CONFIG TYPE (MSSQL/MYSQL)######################
 - configMapName: ef-grafana-supervisor-dashboard-mysql
   folderName: default
   fileName: Supervisor_Dashboard_CIM-mysql.json

For release CX-4.1 onwards

Use the following ConfigMap name and file name parameters for Grafana:

BASH
sed -i -e 's@configMapName: <configmap-map-name>@configMapName: ef-grafana-supervisor-dashboard-mysql@' external/bitnami/grafana/values.yaml

and

BASH
sed -i -e 's@fileName: <file-name>@fileName: Supervisor_Dashboard_CIM-mysql.json@' external/bitnami/grafana/values.yaml

For MSSQL:

CODE
kubectl create configmap ef-grafana-supervisor-dashboard-mssql -n ef-external --from-file=post-deployment/config/grafana/supervisor-dashboards/Supervisor_Dashboard_CIM-mssql.json

For release 4.0 or older

Update the values under the dashboardsConfigMaps object in external/bitnami/grafana/values.yaml to match the following lines:

CODE
dashboardsConfigMaps:
   ###################### DASHBOARD CONFIG TYPE (MSSQL/MYSQL)######################
 - configMapName: ef-grafana-supervisor-dashboard-mssql
   folderName: default
   fileName: Supervisor_Dashboard_CIM-mssql.json


For release CX-4.2 and onwards

Use the following ConfigMap name and file name parameters for Grafana:


BASH
sed -i -e 's@configMapName: <configmap-map-name>@configMapName: ef-grafana-supervisor-dashboard-mssql@' external/bitnami/grafana/values.yaml

and

BASH
sed -i -e 's@fileName: <file-name>@fileName: Supervisor_Dashboard_CIM-mssql.json@' external/bitnami/grafana/values.yaml

6. Create a ConfigMap for grafana.ini to store the default values:

CODE
kubectl -n ef-external create configmap ef-grafana-ini-cm  --from-file=pre-deployment/grafana/grafana.ini

7. Use the following Helm command to deploy Grafana:

CODE
helm upgrade --install=true --wait=true --timeout=10m0s  --debug  --namespace=ef-external --values=external/bitnami/grafana/values.yaml grafana  external/bitnami/grafana
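
Optionally, verify the Grafana rollout (a sketch assuming a Deployment named grafana, as reflected in the pod listing later in this guide):

CODE
kubectl -n ef-external rollout status deployment grafana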

Superset

Deploying Superset on the same node that runs the CIM deployment is not recommended. Superset must be installed on a separate node.


Superset will be deployed in the ef-bi namespace. Run the following command on the master node:

CODE
kubectl create namespace ef-bi

Create the image pull secret for the ef-bi namespace:

CODE
kubectl apply -f pre-deployment/registryCredits/ef-imagePullSecret-ef-bi.yaml

Run the following command to install Superset. 

CODE
helm upgrade --install=true --wait=true --timeout=10m0s --debug --namespace=ef-bi --values external/superset/values.yaml superset external/superset/ 

Please wait for some time for the deployment to settle down before executing the port patching commands below.

Expose the Superset service from ClusterIP to NodePort by running the following command

CODE
kubectl patch svc superset -n ef-bi  --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'

Run the following command to see the NodePort that has been assigned to Superset.

CODE
kubectl -n ef-bi  get svc superset -o go-template='{{(index .spec.ports 0).nodePort}}';echo;

Now you can access Superset on this port on any node's IP address.

Superset on HTTPS protocol

If you want to access Superset over HTTPS, run the following command to change the sub-domain name:

CODE
sed -i 's/devops[0-9]*.ef.com/<subdomain.ef.com>/g'   cim/Ingresses/nginx/ef-superset-Ingress.yaml 


Apply the Superset Ingress route.

CODE
kubectl apply -f cim/Ingresses/nginx/ef-superset-Ingress.yaml 


Now generate a self-signed certificate with the following command.

Please replace <subdomain> with your actual sub-domain before running this command.

BASH
openssl req -x509 \
-newkey rsa:4096 \
-sha256 \
-days 3650 \
-nodes \
-keyout <subdomain>.key \
-out <subdomain>.crt \
-subj "/CN=<subdomain>" \
-addext "subjectAltName=DNS:www.<subdomain>,DNS:<subdomain>"

Now create a secret from the certificate files. You must have the <subdomain>.key and <subdomain>.crt files generated above available on the machine; adjust the paths in the command below to point to them.


CODE
kubectl -n ef-bi create secret tls ef-ingress-tls-secret-superset \
--key pre-deployment/certificates/server.key \
--cert pre-deployment/certificates/server.crt  
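
You can verify that the secret was created:

CODE
kubectl -n ef-bi get secret ef-ingress-tls-secret-superset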

RASA-X 

Deploying Rasa-X on the same node is not recommended. Install it on a separate node for a production-ready system; deploying on the same node severely degrades overall performance.

Create the rasa-x namespace using:

CODE
kubectl create namespace rasa-x

Add the image pull secret for rasa-x:

CODE
kubectl apply -f pre-deployment/registryCredits/ef-imagePullSecret-rasa-x.yaml

Add the customized Nginx config for RASA-X as a ConfigMap:

CODE
kubectl create -f pre-deployment/rasa-x-1.1.2/ef-rasa-x-nginx-standard-conf.yaml

Update the FQDN parameter for RASA-X

Change <CIM-FQDN> to your actual FQDN for the CIM solution.

CODE
sed -i -e 's@value: http://devops.ef.com/bot-framework@value: https://<CIM-FQDN>/bot-framework@' external/rasa-x/values-small.yaml

and

CODE
sed -i -e 's@value: https://devops.ef.com@value: https://<CIM-FQDN>@' external/rasa-x/values-small.yaml

Install Rasa-X using the Helm command:

CODE
helm  upgrade --install=true --wait=true --timeout=10m0s --debug   rasa-x    --namespace rasa-x     --values external/rasa-x/values-small.yaml  external/rasa-x

The Rasa-X deployment may take longer than expected due to its dependency on large images, and Helm may throw a warning like Error: timed out waiting for the condition; however, the deployment continues in the background. You can check the status of the deployment by running `kubectl get pods -n rasa-x`, which gives more recent information.

Your RASA-X service is available over HTTP on NodePort 30800 on all node IP addresses in your cluster.

External Components Configurations

Please wait for the external components to be ready before proceeding with Expertflow CX deployment.

Check the status of the external components:

CODE
#kubectl get pods -n ef-external
NAME                      READY   STATUS    RESTARTS        AGE
ef-postgresql-0           1/1     Running   0               14m
redis-master-0            1/1     Running   0               9m22s
minio-8f588955-6jvkh      1/1     Running   0               9m31s
keycloak-0                1/1     Running   0               10m
grafana-8b5494b86-4rp42   1/1     Running   0               7m18s
mongo-mongodb-0           1/1     Running   1 (3m16s ago)   9m41s

Once all the external components are in the ready state and the READY column shows the required number of containers up (for example, 1/1 means 1 out of 1 containers are up), proceed with deploying the Expertflow CX components.

StatefulSet

ActiveMQ should be deployed before all other solution components. To deploy ActiveMQ as a StatefulSet, run:

CODE
kubectl apply -f cim/StatefulSet/ef-amq-statefulset.yaml

Wait for the AMQ StatefulSet to become ready:

CODE
kubectl wait pods ef-amq-0  -n ef-external   --for condition=Ready --timeout=600s

ConfigMaps

Conversation Manager ConfigMaps

If you need to change the default training, please update the corresponding files.

CODE
kubectl -n expertflow create configmap ef-conversation-controller-actions-cm --from-file=pre-deployment/conversation-Controller/actions
kubectl -n expertflow create configmap ef-conversation-controller-actions-pycache-cm --from-file=pre-deployment/conversation-Controller/__pycache__
kubectl -n expertflow create configmap ef-conversation-controller-actions-utils-cm --from-file=pre-deployment/conversation-Controller/utils
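
Optionally, confirm that the ConfigMaps were created:

CODE
kubectl -n expertflow get configmap | grep ef-conversation-controller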

Reporting Connector ConfigMaps

Please update the fqdn, browser_language, connection_type, and database server connection parameters in the file pre-deployment/reportingConnector/reporting-connector.conf and then deploy:

CODE
kubectl -n expertflow create configmap ef-reporting-connector-conf --from-file=pre-deployment/reportingConnector/reporting-connector.conf

kubectl -n expertflow create configmap ef-reporting-connector-cron --from-file=pre-deployment/reportingConnector/reporting-connector-cron

Unified Agent  ConfigMaps


 Translations for the unified agent are applicable in HC-4.1 and later releases.

CODE
kubectl -n expertflow  create configmap ef-app-translations-cm --from-file=pre-deployment/app-translations/unified-agent/i18n

Apply all the ConfigMaps in the ConfigMaps folder using:

CODE
kubectl apply -f cim/ConfigMaps/

Services

Create the Services for all EF component Deployments:

CODE
kubectl apply -f cim/Services/


Services must be created before Deployments

Deployments

Apply all the Deployment manifests:

CODE
kubectl apply -f cim/Deployments/
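
You can watch the Expertflow pods come up with the following generic check (pod names will vary):

CODE
kubectl -n expertflow get pods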


Team Announcement CronJob

The team announcement cron job is applicable in HC-4.2 and later releases.

CODE
kubectl apply -f pre-deployment/team-announcement/

Import your own certificates

Now create a secret from the certificate files. You must have the server.key and server.crt files available on the machine in the correct directory (pre-deployment/certificates/).

For the expertflow namespace:

CODE
kubectl -n expertflow create secret tls ef-ingress-tls-secret \
--key pre-deployment/certificates/server.key \
--cert pre-deployment/certificates/server.crt  

And for the ef-external namespace:

CODE
kubectl -n ef-external create secret tls ef-ingress-tls-secret \
--key pre-deployment/certificates/server.key \
--cert pre-deployment/certificates/server.crt  
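
Optionally, confirm that the secrets exist in both namespaces:

CODE
kubectl -n expertflow get secret ef-ingress-tls-secret
kubectl -n ef-external get secret ef-ingress-tls-secret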

Import your own certificates for RKE 

Now generate a certificate and create a secret with the following commands.

Please replace <FQDN> with your actual FQDN before running these commands.

BASH
openssl req -x509 \
-newkey rsa:4096 \
-sha256 \
-days 3650 \
-nodes \
-keyout <FQDN>.key \
-out <FQDN>.crt \
-subj "/CN=<FQDN>" \
-addext "subjectAltName=DNS:www.<FQDN>,DNS:<FQDN>"

For the expertflow namespace:

BASH
kubectl -n expertflow create secret tls ef-ingress-tls-secret --key  <fqdn>.key --cert <fqdn>.crt

And for the ef-external namespace:

BASH
kubectl -n ef-external  create secret tls ef-ingress-tls-secret --key  <fqdn>.key --cert <fqdn>.crt

Ingress

You need to apply the Ingress routes for the Ingress Controller that matches your Kubernetes distribution.

For K3s-based deployments using Traefik Ingress Controller

CODE
kubectl apply -f cim/Ingresses/traefik/

For RKE2-based Ingresses using Ingress-Nginx Controller

Decide the FQDN to be used in your solution and change <FQDN> in the command below to your actual FQDN:

CODE
sed -i 's/devops[0-9]*.ef.com/<FQDN>/g'    cim/Ingresses/nginx/*  

Apply the Ingress Routes.

BASH
kubectl apply -f cim/Ingresses/nginx/
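
Optionally, list the created Ingress resources to confirm they were applied (a sketch assuming the routes are standard Ingress objects):

BASH
kubectl -n expertflow get ingress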

Channel Manager Icons Bootstrapping

Once all Expertflow service pods are up and running, execute the following steps so that the media channel icons render successfully.

Run the minio-helper pod using:

CODE
kubectl apply -f scripts/minio-helper.yaml

Wait for the pod to start before copying the media icons into the helper pod:

CODE
 kubectl -n ef-external --timeout=90s wait --for=condition=ready pod minio-helper

Wait for the response: pod/minio-helper condition met.

Copy the files to the minio-helper pod.

CODE
kubectl -n ef-external cp post-deployment/data/minio/bucket/default minio-helper:/tmp/

Copy the icon-helper.sh script into the minio-helper pod:

CODE
 kubectl -n ef-external cp scripts/icon-helper.sh minio-helper:/tmp/

Execute icon-helper.sh using:

CODE
kubectl -n ef-external exec -it minio-helper -- /bin/sh /tmp/icon-helper.sh

Delete the minio-helper pod:

CODE
kubectl delete -f scripts/minio-helper.yaml

Chat Initiation URL

The web-init-widget can now invoke the CIM deployment directly from the URL:

CODE
https://{FQDN}/web-widget/cim-web-init-widget/?customerWidgetUrl=https://{FQDN}/customer-widget&widgetIdentifier=Web&serviceIdentifier=1122&channelCustomerIdentifier=1133

For the chat history, use the following URL

CODE
https://{FQDN}/web-widget/chat-transcript/

{FQDN} → FQDN of the Kubernetes deployment
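
For example, with a hypothetical FQDN of cx.example.com, the chat initiation URL would be:

CODE
https://cx.example.com/web-widget/cim-web-init-widget/?customerWidgetUrl=https://cx.example.com/customer-widget&widgetIdentifier=Web&serviceIdentifier=1122&channelCustomerIdentifier=1133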

Once all the components are successfully deployed, access them to configure the solution. Keycloak is accessible at http://{cim-fqdn}/auth, unified-admin at http://{cim-fqdn}/unified-admin, and so on.