Expertflow CX Deployment on Kubernetes
Step 3: Create Namespace
All Expertflow components are deployed in a separate Kubernetes namespace called 'expertflow'.
1. Create the namespace by running the following command on the master node.
kubectl create namespace expertflow
2. All external components will be deployed in the ef-external namespace. Run the following command on the master node.
kubectl create namespace ef-external
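Optionally, verify that both namespaces exist before continuing:
kubectl get namespace expertflow ef-external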
Step 4: Image Pull Secret
1. For the expertflow namespace, use the following command:
kubectl apply -f pre-deployment/registryCredits/ef-imagePullSecret-expertflow.yaml
2. Run the following command for the ef-external namespace:
kubectl apply -f pre-deployment/registryCredits/ef-imagePullSecret-ef-external.yaml
Step 5: Update the FQDN
1. Decide the FQDN to be used in your solution and change <FQDN> in the following command to your actual FQDN:
sed -i 's/devops[0-9]*.ef.com/<FQDN>/g' cim/ConfigMaps/* pre-deployment/grafana/* pre-deployment/keycloak/* cim/Ingresses/traefik/*
When deploying the solution on a single control-plane node, replace <FQDN> with the FQDN of your master node. For a multi-control-plane setup, use the VIP or the FQDN associated with the VIP.
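For example, assuming a hypothetical FQDN of cx.example.com, the command would be:
sed -i 's/devops[0-9]*.ef.com/cx.example.com/g' cim/ConfigMaps/* pre-deployment/grafana/* pre-deployment/keycloak/* cim/Ingresses/traefik/*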
Expertflow CX External Components
The following external components must be deployed alongside the Expertflow CX solution for it to work:
1. PostgreSQL
PostgreSQL is deployed as the central datastore for both LicenseManager and Keycloak.
1. Create a ConfigMap for PostgreSQL to load the LicenseManager database and create keycloak_db:
kubectl -n ef-external create configmap ef-postgresql-license-manager-cm --from-file=./pre-deployment/licensemanager/licensemanager.sql
2. Deploy PostgreSQL on the cluster using the Helm command given below:
helm upgrade --install=true --wait=true --timeout=10m0s --debug --namespace=ef-external --values=external/bitnami/postgresql/values.yaml ef-postgresql external/bitnami/postgresql
2. Keycloak
1. On the master node, create a global ConfigMap for Keycloak. Change the hostname and other parameters in the ef-keycloak-configmap.yaml file before applying this command:
kubectl apply -f pre-deployment/keycloak/ef-keycloak-configmap.yaml
2. The Helm command for Keycloak is given below:
helm upgrade --install=true --wait=true --timeout=10m0s --debug --namespace=ef-external --values=external/bitnami/keycloak/values.yaml keycloak external/bitnami/keycloak/
3. You can check the status of the deployment by using the following command:
kubectl -n ef-external rollout status sts keycloak
3. MongoDB
1. Deploy MongoDB using the Helm command given below:
helm upgrade --install=true --wait=true --timeout=10m0s --debug --namespace=ef-external --values=external/bitnami/mongodb/values.yaml mongo external/bitnami/mongodb/
2. Check the Mongo deployment status by running the following command:
kubectl -n ef-external rollout status sts mongo-mongodb
4. MinIO
The Helm command for MinIO is as follows:
helm upgrade --install=true --wait=true --timeout=10m0s --debug --namespace=ef-external --values=external/bitnami/minio/values.yaml minio external/bitnami/minio/
5. Redis
The Helm command for deploying Redis is as follows:
helm upgrade --install=true --wait=true --timeout=10m0s --debug --namespace=ef-external --values=external/bitnami/redis/values.yaml redis external/bitnami/redis/
6. Grafana
Before proceeding with the steps below, edit the post-deployment/config/grafana/supervisor-dashboards/datasource.yml file and set the database connection parameters correctly.
1. For reporting_stats_datasource (see the illustrative example after item 2 below):
type: mysql | mssql => type of database, mysql or mssql
url: 192.168.1.182 => complete URL where the data source is accessible (optional), e.g. mysql://user:secret@host:port/databaseName or jdbc:mysql://localhost:3306/exampleDb?user=myuser&password=mypassword
password: 68i3nj7t => password
user: elonmusk => username
database: cim_etl_report => database name
host: 192.168.1.182 => hostname or IP address of the database server where the data source is hosted
2. For supervisor_dashboard_cim_json_api
url: https://cim.expertflow.com => FQDN of the machine (do not append a slash (/) at the end)
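For illustration only, and assuming the shipped datasource.yml follows Grafana's standard datasource provisioning layout, a filled-in reporting_stats_datasource entry for MySQL might look like the following (all values are placeholders; keep whatever structure and additional fields the shipped file already contains):
apiVersion: 1
datasources:
  - name: reporting_stats_datasource
    type: mysql
    url: 192.168.1.182:3306
    user: elonmusk
    password: 68i3nj7t
    database: cim_etl_report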
3. Create the datasource secret from the datasource.yml file:
kubectl -n ef-external create secret generic ef-grafana-datasource-secret --from-file=post-deployment/config/grafana/supervisor-dashboards/datasource.yml
4. Create the dashboard provider ConfigMap for Grafana:
kubectl create cm ef-grafana-dashboard-provider-cm -n ef-external --from-file=post-deployment/config/grafana/supervisor-dashboards/dashboard.yml
Apply only the ConfigMap that matches your database environment. Do not apply both ConfigMaps.
5. Create the ConfigMap for the supervisor dashboard files using the steps below.
For a MySQL datasource:
kubectl create configmap ef-grafana-supervisor-dashboard-mysql -n ef-external --from-file=post-deployment/config/grafana/supervisor-dashboards/Supervisor_Dashboard_CIM-mysql.json
For release 4.0 or older
Update the values under the dashboardsConfigMaps object in external/bitnami/grafana/values.yaml to match the following:
dashboardsConfigMaps:
###################### DASHBOARD CONFIG TYPE (MSSQL/MYSQL)######################
  - configMapName: ef-grafana-supervisor-dashboard-mysql
    folderName: default
    fileName: Supervisor_Dashboard_CIM-mysql.json
For release CX-4.1 onwards
Use the following commands to set the ConfigMap name and file name parameters for Grafana:
sed -i -e 's@configMapName: <configmap-map-name>@configMapName: ef-grafana-supervisor-dashboard-mysql@' external/bitnami/grafana/values.yaml
and
sed -i -e 's@fileName: <file-name>@fileName: Supervisor_Dashboard_CIM-mysql.json@' external/bitnami/grafana/values.yaml
For an MSSQL datasource:
kubectl create configmap ef-grafana-supervisor-dashboard-mssql -n ef-external --from-file=post-deployment/config/grafana/supervisor-dashboards/Supervisor_Dashboard_CIM-mssql.json
For release 4.0 or older
Update the values under the dashboardsConfigMaps object in external/bitnami/grafana/values.yaml to match the following:
dashboardsConfigMaps:
###################### DASHBOARD CONFIG TYPE (MSSQL/MYSQL)######################
  - configMapName: ef-grafana-supervisor-dashboard-mssql
    folderName: default
    fileName: Supervisor_Dashboard_CIM-mssql.json
For release CX-4.2 and onwards
Use the following commands to set the ConfigMap name and file name parameters for Grafana (a quick verification example follows these commands):
sed -i -e 's@configMapName: <configmap-map-name>@configMapName: ef-grafana-supervisor-dashboard-mssql@' external/bitnami/grafana/values.yaml
and
sed -i -e 's@fileName: <file-name>@fileName: Supervisor_Dashboard_CIM-mssql.json@' external/bitnami/grafana/values.yaml
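To confirm that the substitutions took effect, you can optionally inspect the updated values file, for example:
grep -A 2 'configMapName' external/bitnami/grafana/values.yaml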
6. Create a ConfigMap for grafana.ini to store the default values:
kubectl -n ef-external create configmap ef-grafana-ini-cm --from-file=pre-deployment/grafana/grafana.ini
7. Use the following Helm command to deploy grafana:
helm upgrade --install=true --wait=true --timeout=10m0s --debug --namespace=ef-external --values=external/bitnami/grafana/values.yaml grafana external/bitnami/grafana
Superset
Deploying Superset on the same node that runs the CIM deployment is not recommended. Superset must be installed on a separate node.
Superset will be deployed in the ef-bi namespace. Run the following command on the master node.
kubectl create namespace ef-bi
Create the image pull secret for the ef-bi namespace:
kubectl apply -f pre-deployment/registryCredits/ef-imagePullSecret-ef-bi.yaml
Run the following command to install Superset.
helm upgrade --install=true --wait=true --timeout=10m0s --debug --namespace=ef-bi --values external/superset/values.yaml superset external/superset/
Please wait some time for the deployment to settle before executing the port patching commands below. Expose the Superset service from ClusterIP to NodePort by running the following command:
kubectl patch svc superset -n ef-bi --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'
Run the following command to see which NodePort has been assigned to Superset.
kubectl -n ef-bi get svc superset -o go-template='{{(index .spec.ports 0).nodePort}}';echo;
Superset is now accessible on this port on any node in the cluster.
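For example, assuming the command returned a hypothetical NodePort of 31650 and one of your node IP addresses is 192.0.2.10, Superset would be reachable at:
http://192.0.2.10:31650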
Superset on HTTPS protocol
If you want to access Superset over HTTPS, run the following command to change the subdomain name.
sed -i 's/devops[0-9]*.ef.com/<subdomain.ef.com>/g' cim/Ingresses/nginx/ef-superset-Ingress.yaml
Apply the superset Ingress Route.
kubectl apply -f cim/Ingresses/nginx/ef-superset-Ingress.yaml
Now generate a self-signed certificate with the following command.
Modify <subdomain> to your actual subdomain before running it.
openssl req -x509 \
-newkey rsa:4096 \
-sha256 \
-days 3650 \
-nodes \
-keyout <subdomain>.key \
-out <subdomain>.crt \
-subj "/CN=<subdomain>" \
-addext "subjectAltName=DNS:www.<subdomain>,DNS:<subdomain>"
Now create a TLS secret with the certificate files. The key and certificate files generated above must be available on the machine at the paths referenced in the following command.
kubectl -n ef-bi create secret tls ef-ingress-tls-secret-superset \
--key pre-deployment/certificates/server.key \
--cert pre-deployment/certificates/server.crt
RASA-X
Rasa-X deployment on the same node is not recommended. Install it on a separate node to deploy a production-ready system; deploying on the same node severely degrades overall performance.
Create the rasa-x namespace using:
kubectl create namespace rasa-x
Add the image pull secret for rasa-x:
kubectl apply -f pre-deployment/registryCredits/ef-imagePullSecret-rasa-x.yaml
Add the customized Nginx config for RASA-X as a ConfigMap:
|
Update the FQDN parameter for RASA-X
Change <CIM-FQDN> to the actual FQDN of your CIM solution.
sed -i -e 's@value: http://devops.ef.com/bot-framework@value: https://<CIM-FQDN>/bot-framework@' external/rasa-x/values-small.yaml
and
sed -i -e 's@value: https://devops.ef.com@value: https://<CIM-FQDN>@' external/rasa-x/values-small.yaml
Install Rasa-X using the following Helm command:
helm upgrade --install=true --wait=true --timeout=10m0s --debug rasa-x --namespace rasa-x --values external/rasa-x/values-small.yaml external/rasa-x
Rasa-X deployment may take longer than the allowed time because of its very large images, and may throw a warning like Error: timed out waiting for the condition; however, the deployment continues in the background. You can check the status of the deployment by running `kubectl get pods -n rasa-x`, which gives more recent information.
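If you prefer to block until all Rasa-X pods are ready instead of polling, a command along these lines can be used (the timeout value is only an example):
kubectl -n rasa-x wait pods --all --for=condition=Ready --timeout=1800s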
Your RASA-X service is available over HTTP on NodePort 30800 on all of the node IP addresses in your cluster.
External Components Configurations
Please wait for the external components to be ready before proceeding with Expertflow CX deployment.
Check the status of external components
#kubectl get pods -n ef-external
NAME READY STATUS RESTARTS AGE
ef-postgresql-0 1/1 Running 0 14m
redis-master-0 1/1 Running 0 9m22s
minio-8f588955-6jvkh 1/1 Running 0 9m31s
keycloak-0 1/1 Running 0 10m
grafana-8b5494b86-4rp42 1/1 Running 0 7m18s
mongo-mongodb-0 1/1 Running 1 (3m16s ago) 9m41s
Once all the external components are in the ready state and the READY column shows the required number of containers ready (for example, 1/1 means 1 out of 1 containers are up), proceed with the deployment.
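Alternatively, you can wait for all pods in the ef-external namespace to become ready with a single command, for example:
kubectl -n ef-external wait pods --all --for=condition=Ready --timeout=600s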
StatefulSet
ActiveMQ should be deployed before all other solution components. To deploy ActiveMQ as a StatefulSet, run:
kubectl apply -f cim/StatefulSet/ef-amq-statefulset.yaml
Wait for the AMQ StatefulSet to become ready:
kubectl wait pods ef-amq-0 -n ef-external --for condition=Ready --timeout=600s
ConfigMaps
Conversation Manager ConfigMaps
If you need to change the default training, update the corresponding files before creating these ConfigMaps.
kubectl -n expertflow create configmap ef-conversation-controller-actions-cm --from-file=pre-deployment/conversation-Controller/actions
kubectl -n expertflow create configmap ef-conversation-controller-actions-pycache-cm --from-file=pre-deployment/conversation-Controller/__pycache__
kubectl -n expertflow create configmap ef-conversation-controller-actions-utils-cm --from-file=pre-deployment/conversation-Controller/utils
Reporting Connector ConfigMaps
Please update the "fqdn, browser_language, connection_type and Database server connection parameters" in the file pre-deployment/reportingConnector/reporting-connector.conf
and then deploy.
kubectl -n expertflow create configmap ef-reporting-connector-conf --from-file=pre-deployment/reportingConnector/reporting-connector.conf
kubectl -n expertflow create configmap ef-reporting-connector-cron --from-file=pre-deployment/reportingConnector/reporting-connector-cron
Unified Agent ConfigMaps
Translations for the unified agent are applicable in HC-4.1 and later releases.
kubectl -n expertflow create configmap ef-app-translations-cm --from-file=pre-deployment/app-translations/unified-agent/i18n
Apply all the ConfigMaps in the ConfigMaps folder (see the command sketch below).
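Assuming the solution's ConfigMap manifests are located under cim/ConfigMaps/ (the directory referenced earlier when updating the FQDN in Step 5), the command would be:
kubectl apply -f cim/ConfigMaps/
You can then list the ConfigMaps in the expertflow namespace to confirm they were created:
kubectl -n expertflow get configmaps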
Services
Create services for all EF component Deployments:
kubectl apply -f cim/Services/
Services must be created before Deployments
Deployments
Apply all the Deployment manifests:
kubectl apply -f cim/Deployments/
Team Announcement CronJob
Team announcement cron job is applicable in HC-4.2 and later releases.
kubectl apply -f pre-deployment/team-announcement/
Import your own certificates
Now create a TLS secret with your certificate files. You must have the server.key and server.crt files available on the machine in the correct directory (pre-deployment/certificates/ in the commands below).
For the expertflow namespace:
kubectl -n expertflow create secret tls ef-ingress-tls-secret \
--key pre-deployment/certificates/server.key \
--cert pre-deployment/certificates/server.crt
And for the ef-external namespace:
kubectl -n ef-external create secret tls ef-ingress-tls-secret \
--key pre-deployment/certificates/server.key \
--cert pre-deployment/certificates/server.crt
Import your own certificates for RKE
Now generate a self-signed certificate with the following command.
Modify <FQDN> to your actual FQDN before running it.
openssl req -x509 \
-newkey rsa:4096 \
-sha256 \
-days 3650 \
-nodes \
-keyout <FQDN>.key \
-out <FQDN>.crt \
-subj "/CN=<FQDN>" \
-addext "subjectAltName=DNS:www.<FQDN>,DNS:<FQDN>"
For the expertflow namespace:
kubectl -n expertflow create secret tls ef-ingress-tls-secret --key <FQDN>.key --cert <FQDN>.crt
And for the ef-external namespace:
kubectl -n ef-external create secret tls ef-ingress-tls-secret --key <FQDN>.key --cert <FQDN>.crt
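To confirm that the TLS secrets exist in both namespaces:
kubectl -n expertflow get secret ef-ingress-tls-secret
kubectl -n ef-external get secret ef-ingress-tls-secret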
Ingress
You need to apply the Ingress routes for the Ingress Controller used by your cluster.
For K3s-based deployments using Traefik Ingress Controller
kubectl apply -f cim/Ingresses/traefik/
For RKE2-based Ingresses using Ingress-Nginx Controller
Decide the FQDN to be used in your solution and change <FQDN> in the command below to your actual FQDN:
sed -i 's/devops[0-9]*.ef.com/<FQDN>/g' cim/Ingresses/nginx/*
Apply the Ingress Routes.
kubectl apply -f cim/Ingresses/nginx/
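For the Nginx-based Ingresses, you can list the created Ingress objects across namespaces to verify that the routes were applied:
kubectl get ingress --all-namespaces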
Channel Manager Icons Bootstrapping
Once all Expertflow service pods are completely up and running, execute the following steps so that the media channel icons render successfully.
Run the minio-helper pod using:
kubectl apply -f scripts/minio-helper.yaml
Wait for the pod to start, then copy the media icons from the external folder into the helper pod.
kubectl -n ef-external --timeout=90s wait --for=condition=ready pod minio-helper
and wait for the response: pod/minio-helper condition met
Copy the files to the minio-helper pod.
kubectl -n ef-external cp post-deployment/data/minio/bucket/default minio-helper:/tmp/
Copy the icon-helper.sh script into the minio-helper pod:
kubectl -n ef-external cp scripts/icon-helper.sh minio-helper:/tmp/
Execute icon-helper.sh using:
kubectl -n ef-external exec -it minio-helper -- /bin/sh /tmp/icon-helper.sh
Delete the minio-helper pod:
kubectl delete -f scripts/minio-helper.yaml
Chat Initiation URL
The web-init-widget is now capable of calling the CIM deployment from within the URL:
https://{FQDN}/web-widget/cim-web-init-widget/?customerWidgetUrl=https://{FQDN}/customer-widget&widgetIdentifier=Web&serviceIdentifier=1122&channelCustomerIdentifier=1133
For the chat history, use the following URL
https://{FQDN}/web-widget/chat-transcript/
{FQDN} → FQDN of the Kubernetes deployment
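For example, with a hypothetical FQDN of cx.example.com (and keeping the sample widget and service identifiers from the template above), the chat initiation URL becomes:
https://cx.example.com/web-widget/cim-web-init-widget/?customerWidgetUrl=https://cx.example.com/customer-widget&widgetIdentifier=Web&serviceIdentifier=1122&channelCustomerIdentifier=1133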
Once all the components are successfully deployed, access them to configure the solution. Keycloak is accessible at http://{cim-fqdn}/auth, unified-admin can be accessed at http://{cim-fqdn}/unified-admin, and so on.