
Upgrade Guide CX4.10.3 to CX5.0 (Single Tenant/On-Prem)

Before upgrading, ensure that the system is idle, i.e., all agents are logged out of AgentDesk.

Make sure to flush the Redis cache before proceeding with the upgrade.
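
A minimal sketch of one way to flush it, assuming the existing Redis runs in the ef-external namespace in a pod named redis-master-0 (both the pod name and the way you obtain the password depend on your deployment):

BASH
# Assumed pod name; replace <redis-password> with the password configured for your deployment
kubectl -n ef-external exec -it redis-master-0 -- redis-cli -a <redis-password> FLUSHALL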

Database Migrations

Follow this guide to migrate data for MongoDB.
For the License Manager database in PostgreSQL, run the lis-manager-mtt-migration.sql script.

Follow these steps to execute the above SQL script

CODE
# Check PostgreSQL Helm release status
helm -n ef-external status ef-postgresql

# This will output some instructions. Follow the next steps based on that output.

# Get the password for "sa"
# (Copy and run the command that starts with "export" shown after this line in the Helm output)
export POSTGRES_PASSWORD=$(kubectl get secret --namespace ef-external ef-postgresql -o jsonpath="{.data.password}" | base64 -d)

# Connect to PostgreSQL
# Copy the 3-line command shown in the Helm output after the text:
# "To connect to your database run the following command:"
# Example:
kubectl run ef-postgresql-client --rm --tty -i --restart='Never' \
  --namespace ef-external --image gitimages.expertflow.com/general/postgresql:14.5.0-debian-11-r21 \
  --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql \
  --host ef-postgresql -U sa -d licenseManager

# Verify you’re inside the PostgreSQL client
# You should now see a prompt like this:
# licenseManager=>
# If not, press Enter once.

# From a second terminal, copy your .sql script into the PostgreSQL client pod (needed if the script is on your local machine)
# Replace ef-postgresql-client with the actual pod name if it differs.
kubectl cp ./lis-manager-mtt-migration.sql ef-postgresql-client:/tmp/lis-manager-mtt-migration.sql -n ef-external

# Get inside the PostgreSQL client
kubectl exec -it -n ef-external ef-postgresql-client -- env PGPASSWORD=$POSTGRES_PASSWORD psql --host ef-postgresql -U sa -d licenseManager

# Inside the PostgreSQL client, execute your .sql script:
\i /tmp/lis-manager-mtt-migration.sql

# Verify that the table was created
\dt

# Exit the PostgreSQL client
\q

Custom Configuration Strategy

For detailed guidelines on applying environment-specific configurations using custom values.yaml layering, refer to the CX Helm Chart Custom Configuration Strategy guide.

Refer to this guide to see the complete change log.
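
As an illustration of the layering behaviour (a sketch only; ef-cx-prod-overrides.yaml is a hypothetical file name), Helm merges multiple --values files, with the right-most file taking precedence:

BASH
# Base custom values plus an environment-specific override layered on top (later files win)
helm upgrade --install --namespace expertflow ef-cx \
  --values helm-values/ef-cx-custom-values.yaml \
  --values helm-values/ef-cx-prod-overrides.yaml \
  expertflow/cx --version 5.0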

1. Update Helm Repo

BASH
helm repo update expertflow

2. Clone the CX Repository on the Target Server

BASH
# Create CX-5.0 directory from root
mkdir CX-5.0

# Navigate to CX-5.0
cd CX-5.0

# Clone the CX-5.0 branch of cim-solution repository
git clone -b CX-5.0 https://efcx:RecRpsuH34yqp56YRFUb@gitlab.expertflow.com/cim/cim-solution.git $HOME/CX-5.0

# Navigate to root (previous directory)
cd ..

3. Change Directory to the Current Deployment

BASH
cd CX-4.10.3/kubernetes

4. Apply Redis ACL Secret

BASH
kubectl -n ef-external create secret generic ef-redis-acl-secret --from-literal=superuser=Expertflow464

5. Deploy Redis

BASH
helm show values expertflow/redis --version 5.0 > helm-values/ef-redis-custom-values.yaml
helm upgrade --install=true --namespace=ef-external --values=helm-values/ef-redis-custom-values.yaml redis expertflow/redis --version 5.0
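
Once the release is applied, you can optionally confirm that the Redis pods are running before moving on:

BASH
kubectl -n ef-external get pods | grep redis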

6. Deploy APISIX

CODE
helm show values expertflow/apisix --version 5.0  > helm-values/apisix-custom-values.yaml

Update the apisix-custom-values.yaml file with the following parameters:

CODE
global:
  ingressRouter: <FQDN>   # replace FQDN
  ingressClassName: "nginx"
  ingressTlsCertName: "ef-ingress-tls-secret"
CODE
helm upgrade --install --namespace ef-external --values helm-values/apisix-custom-values.yaml apisix expertflow/apisix --version 5.0

7. Deploy CX Bus (ActiveMQ)

BASH
helm show values expertflow/activemq --version 5.0 > helm-values/ef-activemq-custom-values.yaml
helm upgrade --install=true  --namespace=ef-external --values=helm-values/ef-activemq-custom-values.yaml activemq expertflow/activemq --version 5.0

8. Deploy Vault

BASH
helm show values expertflow/vault --version 5.0 > helm-values/ef-vault-custom-values.yaml
helm uninstall -n vault vault

kubectl get secret mongo-mongodb-ca -n ef-external -o yaml | sed 's/namespace: ef-external/namespace: vault/' | kubectl create -f -

helm upgrade --install --namespace vault --create-namespace vault --debug --values helm-values/ef-vault-custom-values.yaml expertflow/vault --version 5.0

🔗 Use the following guide: Vault Dynamic Database Configuration
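
Before continuing, it can help to confirm that Vault is up and unsealed (an optional check; the pod name vault-0 assumes the chart's default StatefulSet naming):

BASH
kubectl -n vault exec vault-0 -- vault status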

9. Copy Schemas and Rule Files for Routing Engine

BASH
# Copy Routing Engine Directory
cp -r ../../CX-5.0/kubernetes/pre-deployment/routing-engine pre-deployment/

10. Apply ConfigMaps for the Routing Engine

BASH
kubectl create configmap -n expertflow routing-engine-graphql-schemas --from-file=./pre-deployment/routing-engine/graphql/schemas
kubectl create configmap -n expertflow routing-engine-graphql-memory-rules --from-file=./pre-deployment/routing-engine/graphql/graphql-memory-rules.json
kubectl create configmap -n expertflow routing-engine-graphql-redis-rules --from-file=./pre-deployment/routing-engine/graphql/graphql-redis-rules.json
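
Optionally verify that all three ConfigMaps exist before upgrading CX Core:

BASH
kubectl -n expertflow get configmap | grep routing-engine-graphql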

11. Upgrade CX Core

If you do not have any custom training/flows for Conversation Studio, delete its PVC using the following commands:

CODE
# Execute the first command and wait until the Conversation Studio pod is removed before executing the second command.
kubectl delete -n expertflow statefulset ef-cx-conversation-studio 
kubectl delete -n expertflow pvc/conversation-studio-flow-vol-ef-cx-conversation-studio-0 

In ef-cx-custom-values.yaml add the following:

CODE
efConnectionVars:
  ROOT_DOMAIN: "expertflow"

# Under realtime-reports siteEnvVars, update the DATASOURCE_URL value to exclude the SQL DB name, as it is not required in 5.0
realtime-reports:
  siteEnvVars:
    - name: DATASOURCE_URL
      value: "jdbc:mysql://<ip>:3306"

# Add the following env var under conversation-studio
conversation-studio:
  siteEnvVars:
    - name: TENANT_ID
      value: "expertflow"
    
BASH
helm upgrade --install --namespace expertflow --create-namespace ef-cx --debug --values helm-values/ef-cx-custom-values.yaml expertflow/cx --version 5.0
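
The upgrade may take a few minutes; you can optionally watch the pods roll over before continuing:

BASH
kubectl -n expertflow get pods -w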

Survey is now part of Conversation Studio in CX Core, so its existing deployment should be removed.

CODE
helm uninstall -n expertflow cx-surveys

Please follow this guide to configure the survey node in Conversation-Studio: Configuration Guide

12. Upgrade Agent Desk

BASH
# Copy translation files
cp -r ../../CX-5.0/kubernetes/pre-deployment/app-translations/unified-agent/i18n pre-deployment/app-translations/unified-agent
CODE
kubectl -n expertflow delete configmap ef-app-translations-cm
CODE
kubectl -n expertflow create configmap ef-app-translations-cm --from-file=pre-deployment/app-translations/unified-agent/i18n
BASH
# Copy supervisor dashboard file
cp ../../CX-5.0/kubernetes/post-deployment/config/grafana/supervisor-dashboards/Supervisor_Dashboard_CIM.json post-deployment/config/grafana/supervisor-dashboards

# Copy agent dashboard file
cp ../../CX-5.0/kubernetes/post-deployment/config/grafana/supervisor-dashboards/Agent_Dashboard_CIM.json post-deployment/config/grafana/supervisor-dashboards

# Copy agent teams dashboard
cp ../../CX-5.0/kubernetes/post-deployment/config/grafana/supervisor-dashboards/agent_teams_dashboard.json post-deployment/config/grafana/supervisor-dashboards/agent_teams_dashboard.json

# Copy agent performance dashboard
cp ../../CX-5.0/kubernetes/post-deployment/config/grafana/supervisor-dashboards/agent_performance_dashboard.json post-deployment/config/grafana/supervisor-dashboards/agent_performance_dashboard.json

# Copy social media performance dashboard
cp ../../CX-5.0/kubernetes/post-deployment/config/grafana/supervisor-dashboards/social_media_performance_trend_dashboard.json post-deployment/config/grafana/supervisor-dashboards/social_media_performance_trend_dashboard.json

# Copy team statistics dashboard
cp ../../CX-5.0/kubernetes/post-deployment/config/grafana/supervisor-dashboards/team_statistics_dashboard.json post-deployment/config/grafana/supervisor-dashboards/team_statistics_dashboard.json 
CODE
kubectl -n expertflow delete configmap ef-grafana-supervisor-dashboard
kubectl -n expertflow create configmap ef-grafana-supervisor-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/Supervisor_Dashboard_CIM.json
CODE
kubectl -n expertflow delete configmap ef-grafana-agent-dashboard
kubectl -n expertflow create configmap ef-grafana-agent-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/Agent_Dashboard_CIM.json
CODE
kubectl -n expertflow delete configmap ef-grafana-agent-teams-dashboard
kubectl -n expertflow create configmap ef-grafana-agent-teams-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/agent_teams_dashboard.json
CODE
kubectl -n expertflow delete configmap ef-grafana-agent-performance-dashboard
kubectl -n expertflow create configmap ef-grafana-agent-performance-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/agent_performance_dashboard.json
CODE
kubectl -n expertflow delete configmap ef-grafana-social-media-performance-trend-dashboard
kubectl -n expertflow create configmap ef-grafana-social-media-performance-trend-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/social_media_performance_trend_dashboard.json
CODE
kubectl -n expertflow delete configmap ef-grafana-team-statistics-dashboard
kubectl -n expertflow create configmap ef-grafana-team-statistics-dashboard --from-file=post-deployment/config/grafana/supervisor-dashboards/team_statistics_dashboard.json
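
Optionally confirm that all the dashboard ConfigMaps were recreated:

BASH
kubectl -n expertflow get configmap | grep ef-grafana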

Add the following to post-deployment/config/grafana/supervisor-dashboards/datasource.yml:

CODE
############################################  INFINITY API PLUGIN CONFIGURATION ##########################################
  - name: infinity_cim_json_api
    jsonData:
      allowedHosts: #Add the FQDN of the registered Tenants
        - "https://example1.com"

Create an SQL database for the tenant named <tenant name>, and update the SQL database name in datasource.yml to match it.
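
For illustration only, a minimal sketch of creating that database on a MySQL/MariaDB server (matching the jdbc:mysql datasource above), assuming the tenant name is expertflow; adjust the host, credentials, and database name to your environment:

BASH
# Hypothetical host/credentials; the database name must match the tenant name used in datasource.yml
mysql -h <ip> -u root -p -e "CREATE DATABASE expertflow;"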

Reapply the datasource.yml file

CODE
kubectl -n expertflow delete secret ef-grafana-datasource-secret
kubectl -n expertflow create secret generic ef-grafana-datasource-secret --from-file=post-deployment/config/grafana/supervisor-dashboards/datasource.yml

Add the following to cx-agent-desk-custom-values.yaml and replace <IP> with the host IP:

CODE
grafana:  
  hostAliases: 
    - ip: "<IP>"
      hostnames:
        - "{{ .Values.global.ingressRouter }}"
CODE
helm uninstall -n expertflow cx-agent-desk
BASH
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" cx-agent-desk --debug --values helm-values/cx-agent-desk-custom-values.yaml expertflow/agent-desk --version 5.0

13. Upgrade Channels

BASH
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" --debug cx-channels --values helm-values/cx-channels-custom-values.yaml expertflow/channels --version 5.0

14. Upgrade Campaigns

BASH
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" cx-campaigns --debug --values helm-values/cx-campaigns-custom-values.yaml expertflow/campaigns --version 5.0

15. Upgrade QM

BASH
helm upgrade --install --namespace=expertflow --set global.efCxReleaseName="ef-cx" qm  --debug --values=helm-values/cx-qm-custom-values.yaml expertflow/qm --version 5.0

16. Reporting Configuration

Add the following line to pre-deployment/reportingConnector/reporting-connector.conf:

BASH
tenant_id = expertflow

Update the following parameters as well, setting each of them to expertflow:

  • conversation_manager_db_name = expertflow
  • bot_framework_db_name = expertflow
  • ccm_db_name = expertflow
  • routing_engine_db_name = expertflow
  • cim_customer_db_name = expertflow
  • business_calendars_db_name = expertflow
  • state_events_logger_db_name = expertflow
  • admin_panel_db_name = expertflow

Also update the SQL database name

CODE
sql_database_name=expertflow 

Recreate the ConfigMap:

BASH
kubectl -n expertflow delete configmap ef-reporting-connector-conf-cm
kubectl -n expertflow create configmap ef-reporting-connector-conf-cm --from-file=pre-deployment/reportingConnector/reporting-connector.conf

Upgrade Reporting:

BASH
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" cx-reporting --debug --values helm-values/cx-reporting-scheduler-custom-values.yaml expertflow/reporting --version 5.0

17. CiscoSyncService

CODE
vi helm-values/cx-ciscosyncservice-custom-values.yaml

In the opened file, add the following section to define the FQDN (Fully Qualified Domain Name) for ingress routing:

CODE
global:   
  ingressRouter: <CUSTOM-FQDN> 
siteEnvVars:
    - name: AUTH_SERVER_URL
      value: "https://{{ .Values.global.ingressRouter  }}/auth/"
    - name: EF_SERVER_URL
      value: "https://{{ .Values.global.ingressRouter  }}/unified-admin/"   

Deploy the CiscoSyncService Helm chart:

CODE
helm upgrade --install --namespace expertflow  --set global.efCxReleaseName="ef-cx"  cisco-sync-service  --values helm-values/cx-ciscosyncservice-custom-values.yaml expertflow/cisco-sync-service --version 5.0 

18. Metabase

Navigate to the CX-5.0/kubernetes/ directory:

BASH
cd CX-5.0/kubernetes/
Step 1: Create Custom Values File
BASH
vi helm-values/metabase-custom-values.yaml
Step 2: Add Minimum Required Configuration
YAML
global:
  ingressRouter: <CUSTOM-FQDN>

🔁 Replace <CUSTOM-FQDN> with your actual domain, e.g. devops.example.com.

Step 3: Deploy
BASH
helm upgrade --install=true --debug --namespace=metabase --create-namespace --values=helm-values/metabase-custom-values.yaml metabase expertflow/metabase --version 5.0-alpha

Check Pod

BASH
kubectl get pod -n metabase

Access Metabase

Open in browser:

CODE
https://<FQDN>/metabase

You should see the Metabase setup page.

Follow this guide to complete the setup.

Login credentials:

CODE
Email: superadmin@admin.com

19. EFCX-Bootstrapping

Webhooks Registration

Add webhook information to the MongoDB database for the components that require bootstrapping upon tenant registration.

  • Export mongo certs using:

CODE
mkdir /tmp/mongodb_certs
CERTFILES=($(kubectl get secret mongo-mongodb-ca -n ef-external -o go-template='{{range $k,$v := .data}}{{$k}}{{"\n"}}{{end}}'))
for f in ${CERTFILES[*]}; do   kubectl get secret mongo-mongodb-ca  -n ef-external -o go-template='{{range $k,$v := .data}}{{ if eq $k "'$f'"}}{{$v  | base64decode}}{{end}}{{end}}' > /tmp/mongodb_certs/${f} 2>/dev/null; done
  • Go inside the kubernetes directory and execute the following commands to import the data into the webhooks collection:

CODE
cd CX-5.0/kubernetes
CODE
kubectl -n ef-external run mongo-tools --image=mongo:6.0 --restart=Never -- sleep 3600
CODE
kubectl -n ef-external cp ./post-deployment/cim-tenant.webhooks.json mongo-tools:/tmp/cim-tenant.webhooks.json
kubectl -n ef-external cp /tmp/mongodb_certs/mongodb-ca-cert mongo-tools:/tmp/mongodb-ca-cert
kubectl -n ef-external cp /tmp/mongodb_certs/client-pem mongo-tools:/tmp/combined.pem
CODE
kubectl -n ef-external exec mongo-tools -- \
  mongoimport \
  --host mongo-mongodb.ef-external.svc.cluster.local \
  --port 27017 \
  --db cim-tenant \
  --collection webhooks \
  --file /tmp/cim-tenant.webhooks.json \
  --jsonArray \
  --ssl \
  --sslCAFile /tmp/mongodb-ca-cert \
  --sslPEMKeyFile /tmp/combined.pem \
  --username root \
  --password Expertflow123 \
  --authenticationDatabase admin
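
Before deleting the helper pod, you can optionally confirm that the documents were imported (a sketch reusing the same credentials and certificate files as the import above; mongosh ships in the mongo:6.0 image):

CODE
kubectl -n ef-external exec mongo-tools -- \
  mongosh "mongodb://root:Expertflow123@mongo-mongodb.ef-external.svc.cluster.local:27017/cim-tenant?authSource=admin" \
  --tls --tlsCAFile /tmp/mongodb-ca-cert --tlsCertificateKeyFile /tmp/combined.pem \
  --quiet --eval "db.webhooks.countDocuments()"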
CODE
kubectl -n ef-external delete pod mongo-tools

20. Tenant Onboarding

Step 1: Import the tenant configurations

  • Create a tenant-config directory under /root and place both of these files in it: import_tenant.sh and cim-tenant.tenants.json

  • Update the dialer variables in the cim-tenant.tenants.json file according to the tenant’s environment

  • Use the commands below to execute the script

CODE
cd /root/tenant-config
CODE
chmod +x import_tenant.sh
CODE
./import_tenant.sh <tenantId> <FQDN>

Here, tenantId and FQDN are supplied as arguments to the script.
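
For example, with a hypothetical tenant ID of expertflow and the ingress FQDN devops.example.com:

CODE
./import_tenant.sh expertflow devops.example.com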

Step 2: Register Tenant FQDN on Grafana allowedHosts

Follow the steps in this guide.

21. CX Voice

Follow this guide to upgrade CX Voice to 5.0

22. VRS Upgrade

Follow this guide to upgrade VRS from 14.5 to 14.6
