Before upgrading, ensure that the system is idle, i.e., all agents are logged out of AgentDesk.
Keep the system idle for 30 minutes so that the reporting data can finish syncing.
Clone the CX repository on the target server
# Create the CX-4.9 directory from root
mkdir CX-4.9
# Navigate to CX-4.9
cd CX-4.9
# Clone the CX-4.9 branch of the cim-solution repository
git clone -b CX-4.9 https://efcx:RecRpsuH34yqp56YRFUb@gitlab.expertflow.com/cim/cim-solution.git $HOME/CX-4.9
# Navigate back to the previous directory
cd ..
Update the Helm repo
helm repo update expertflow
Change the directory to the current deployment of CX
# Change to the kubernetes directory of the current deployment
cd CX-4.8/kubernetes
Data migration for the Routing Engine
Place the migration configuration in transflux/config/data_migration_config.yaml.
# Change to the transflux directory
cd transflux
# Delete the configuration ConfigMap for the CX-Transflux pipelines
kubectl -n expertflow delete configmap ef-transflux-config-cm
# Re-create the configuration ConfigMap for the CX-Transflux pipelines
kubectl -n expertflow create configmap ef-transflux-config-cm --from-file=config
# Update the transflux image tag in the values file
vi helm-values/cx-transflux-custom-values.yaml
tag: 4.9
# Re-deploy CX-Transflux
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" cx-transflux --debug --values helm-values/cx-transflux-custom-values.yaml expertflow/transflux --version 4.9.0
Access the Data Platform on its dedicated FQDN, un-pause the "Routing_Engine_Migration" pipeline (toggle on the left side) and trigger it (play button on the right side).
Update the Unified-Agent Translation ConfigMaps
# Copy the unified-agent translation directory into the current release
1. Copy from CX-4.9/kubernetes/pre-deployment/app-translations/unified-agent/i18n
   to pre-deployment/app-translations/unified-agent
2. Delete and re-create the ConfigMap:
kubectl -n expertflow delete configmap ef-app-translations-cm
kubectl -n expertflow create configmap ef-app-translations-cm --from-file=pre-deployment/app-translations/unified-agent/i18n/
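The copy in step 1 above is a plain recursive copy. The sketch below demonstrates its shape inside a temporary sandbox so it can be run anywhere; the `en.json` file and its content are dummy placeholders, and in the real upgrade the final `cp -r` runs from the directory that holds both checkouts.

```shell
# Build a sandbox mirroring the two checkout layouts (illustration only).
SANDBOX=$(mktemp -d)
mkdir -p "$SANDBOX/CX-4.9/kubernetes/pre-deployment/app-translations/unified-agent/i18n"
echo '{"greeting":"Hello"}' > "$SANDBOX/CX-4.9/kubernetes/pre-deployment/app-translations/unified-agent/i18n/en.json"
mkdir -p "$SANDBOX/pre-deployment/app-translations/unified-agent"

# The actual copy, same shape as the real step:
cp -r "$SANDBOX/CX-4.9/kubernetes/pre-deployment/app-translations/unified-agent/i18n" \
      "$SANDBOX/pre-deployment/app-translations/unified-agent"

# The i18n directory now exists under the current release's translation path.
ls "$SANDBOX/pre-deployment/app-translations/unified-agent/i18n"
```

Because the destination directory already exists, `cp -r SRC DST` places `i18n` inside it, which is exactly what the ConfigMap's `--from-file` path expects.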
Update Conversation Controller ConfigMaps
# Copy the actions directory into the current release
1. Copy from CX-4.9/kubernetes/pre-deployment/conversation-Controller/actions
   to pre-deployment/conversation-Controller
2. Delete and re-create the ConfigMap:
kubectl -n expertflow delete configmap ef-conversation-controller-actions-cm
kubectl -n expertflow create configmap ef-conversation-controller-actions-cm --from-file=pre-deployment/conversation-Controller/actions
Update the ActiveMQ helm chart
# Update the ActiveMQ helm chart
# Edit/update the values file helm-values/ef-activemq-custom-values.yaml with:
tag: 6.0.0-alpine-zulu-K8s-4.9
helm upgrade --install=true --namespace=ef-external --values=helm-values/ef-activemq-custom-values.yaml activemq expertflow/activemq --version 4.9.0
Update the grafana deployment
# Uninstall the Agent Desk chart before replacing the dashboards
helm -n expertflow uninstall cx-agent-desk

# Delete the dashboard ConfigMaps for MySQL
kubectl -n expertflow delete configmap ef-grafana-supervisor-dashboard-mysql
kubectl -n expertflow delete configmap ef-grafana-agent-dashboard-mysql

# Delete the dashboard ConfigMaps for MSSQL
kubectl -n expertflow delete configmap ef-grafana-supervisor-dashboard-mssql
kubectl -n expertflow delete configmap ef-grafana-agent-dashboard-mssql

# Copy the MSSQL dashboard files into the current release
Copy from CX-4.9/kubernetes/post-deployment/config/grafana/supervisor-dashboards/Agent_Dashboard_CIM-mssql.json
  to post-deployment/config/grafana/supervisor-dashboards
Copy from CX-4.9/kubernetes/post-deployment/config/grafana/supervisor-dashboards/Supervisor_Dashboard_CIM-mssql.json
  to post-deployment/config/grafana/supervisor-dashboards

# Copy the MySQL dashboard files into the current release
Copy from CX-4.9/kubernetes/post-deployment/config/grafana/supervisor-dashboards/Agent_Dashboard_CIM-mysql.json
  to post-deployment/config/grafana/supervisor-dashboards
Copy from CX-4.9/kubernetes/post-deployment/config/grafana/supervisor-dashboards/Supervisor_Dashboard_CIM-mysql.json
  to post-deployment/config/grafana/supervisor-dashboards

# Create the dashboard ConfigMaps for MySQL
kubectl -n expertflow create configmap ef-grafana-supervisor-dashboard-mysql --from-file=post-deployment/config/grafana/supervisor-dashboards/Supervisor_Dashboard_CIM-mysql.json
kubectl -n expertflow create configmap ef-grafana-agent-dashboard-mysql --from-file=post-deployment/config/grafana/supervisor-dashboards/Agent_Dashboard_CIM-mysql.json

# Create the dashboard ConfigMaps for MSSQL
kubectl -n expertflow create configmap ef-grafana-supervisor-dashboard-mssql --from-file=post-deployment/config/grafana/supervisor-dashboards/Supervisor_Dashboard_CIM-mssql.json
kubectl -n expertflow create configmap ef-grafana-agent-dashboard-mssql --from-file=post-deployment/config/grafana/supervisor-dashboards/Agent_Dashboard_CIM-mssql.json
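The four dashboard ConfigMaps follow a regular naming pattern, so the delete/re-create steps can be scripted. This sketch only builds and prints the commands (review the output before piping it to `sh`); it assumes the dashboard JSON files have already been copied into place, with names and paths exactly as in this guide.

```shell
# Collect the delete/create commands for both DB flavors and both roles.
DASH_DIR=post-deployment/config/grafana/supervisor-dashboards
CMDS=""
for DB in mysql mssql; do
  for ROLE in Supervisor Agent; do
    role=$(printf '%s' "$ROLE" | tr 'A-Z' 'a-z')
    CMDS="${CMDS}kubectl -n expertflow delete configmap ef-grafana-${role}-dashboard-${DB}
kubectl -n expertflow create configmap ef-grafana-${role}-dashboard-${DB} --from-file=${DASH_DIR}/${ROLE}_Dashboard_CIM-${DB}.json
"
  done
done
# Print the commands for review; pipe to "sh" to execute after checking.
printf '%s' "$CMDS"
```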
Update the CX core helm chart
# Update the core component helm chart
# Edit/update the values file helm-values/ef-cx-custom-values.yaml as follows.

Add the following variable in the Routing Engine section under extraEnvVars:
- name: IS_MRD_AUTO_SYNC_ENABLED
  value: "false"

Add the following variable in the Customer Widget section under extraEnvVars:
- name: MUTE_NOTIFICATIONS
  value: "false"

Add the following variables in the File Engine section under extraEnvVars:
- name: CLAMSCAN_HOST
  value: clamav.ef-external.svc
- name: CLAMSCAN_PORT
  value: "3310"
- name: CLAMSCAN_RELOAD_DB
  value: "false"
- name: IS_SCAN_ENABLED
  value: "true"

Update the image tag to 4.9 for each of the following components:
Agent-Manager, Unified-Admin, CIM Backend, Routing-Engine, Customer-Channel-Manager, Bot-Framework, State Event Logger, Historical-Reporting, Realtime-Reporting, Conversation-Manager, Conversation-Monitor, File-Engine, Customer-Widget

helm upgrade --install --namespace expertflow --create-namespace ef-cx --debug --values helm-values/ef-cx-custom-values.yaml expertflow/cx --version 4.9.0
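With a dozen image tags to bump, it is easy to miss one. The sketch below audits every `tag:` line in a values file so a stale tag stands out; the here-doc is a hypothetical miniature stand-in for helm-values/ef-cx-custom-values.yaml, included only to make the sketch runnable.

```shell
# Create a tiny sample values file (stand-in for the real one).
VALUES=$(mktemp)
cat > "$VALUES" <<'EOF'
agent-manager:
  image:
    tag: 4.9
routing-engine:
  image:
    tag: 4.8
EOF

# Show every image tag with its line number; anything not 4.9 needs another edit.
grep -n 'tag:' "$VALUES"

# Count tags still on the old release.
STALE=$(grep -c 'tag: 4.8' "$VALUES")
echo "tags still on 4.8: $STALE"
rm -f "$VALUES"
```

Run the same `grep` against the real helm-values/ef-cx-custom-values.yaml before the `helm upgrade`.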
Update the digital channel icons
kubectl apply -f scripts/minio-helper.yaml
kubectl -n ef-external wait --timeout=90s --for=condition=ready pod minio-helper
# Copy CX-4.9/kubernetes/post-deployment/data/minio/bucket/default to the current release's post-deployment/data/minio/bucket/default, then:
kubectl -n ef-external cp post-deployment/data/minio/bucket/default minio-helper:/tmp/
kubectl -n ef-external cp scripts/icon-helper.sh minio-helper:/tmp/
kubectl -n ef-external exec -it minio-helper -- /bin/sh /tmp/icon-helper.sh
kubectl delete -f scripts/minio-helper.yaml
Create a linkedinmetadata database in Postgres
# Exec into postgresql and run the CREATE DATABASE command below (enter the postgres password when prompted)
kubectl exec -it ef-postgresql-0 -n ef-external -- psql -U <username> -d postgres -c "CREATE DATABASE linkedinmetadata;"
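`CREATE DATABASE` fails if the database already exists, so when re-running the upgrade it helps to check first. A sketch using the same pod and `<username>` placeholder as above (this is a command fragment to run against the live cluster, not a standalone script):

```shell
# Check whether the linkedinmetadata database already exists.
kubectl exec -it ef-postgresql-0 -n ef-external -- psql -U <username> -d postgres \
  -c "SELECT datname FROM pg_database WHERE datname = 'linkedinmetadata';"
# If no row comes back, run the CREATE DATABASE command above; otherwise skip it.
```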
Update the Channel helm chart
# Update the Channel helm chart
# Edit/update the values file helm-values/cx-channels-custom-values.yaml as follows.

# Add the new component email-connector:
email-connector:
  enabled: true
  replicaCount: 1
  image:
    repository: cim/email-connector
    tag: 4.9
  efConnectionVars: true
  efEnvironmentVars: false
  containerPorts:
    - name: "http-em-co-8080"
      containerPort: 8080
  extraEnvVars:
    - name: TZ
      value: '{{ .Values.global.efCommonVars_TZ }}'
    - name: CCM_URL
      value: "http://{{ .Values.global.efCxReleaseName }}-ccm-svc.{{ .Release.Namespace }}.svc:8081"
    - name: FILE_ENGINE_URL
      value: "http://{{ .Values.global.efCxReleaseName }}-file-engine-svc.{{ .Release.Namespace }}.svc:8080"
    - name: SCHEDULER_FIXED_RATE_IN_MS
      value: "60000"
  service:
    enabled: true
    port: 8080
    portName: "http-em-co-8080"
    targetPort: "http-em-co-8080"
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /$2
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
      cert-manager.io/cluster-issuer: "ef-letsencrypt-prod"
    path: /email-connector(/|$)(.*)
    pathType: ImplementationSpecific

# Add the new component whatsapp-connector:
whatsapp-connector:
  enabled: true
  replicaCount: 1
  image:
    repository: cim/whatsapp-connector
    tag: 4.9
  efConnectionVars: false
  efEnvironmentVars: false
  containerPorts:
    - name: "http-wa-co-8080"
      containerPort: 8080
  extraEnvVars:
    - name: TZ
      value: '{{ .Values.global.efCommonVars_TZ }}'
    - name: LOGGING_CONFIG
      value: '{{ .Values.global.efCommonVars_LOGGING_CONFIG }}'
    - name: CCM_URL
      value: "http://{{ .Values.global.efCxReleaseName }}-ccm-svc.{{ .Release.Namespace }}.svc:8081"
    - name: FILE_ENGINE_URL
      value: "http://{{ .Values.global.efCxReleaseName }}-file-engine-svc.{{ .Release.Namespace }}.svc:8080"
    - name: MASKING_LAYOUT_CLASS
      value: "com.ef.connector.whatsappconnector.utils.MaskingPatternLayout"
  service:
    enabled: true
    port: 8080
    portName: "http-wa-co-8080"
    targetPort: "http-wa-co-8080"
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /$2
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
      #cert-manager.io/cluster-issuer: "ef-letsencrypt-prod"
    path: /whatsapp-connector(/|$)(.*)
    pathType: ImplementationSpecific
  extraVolumes:
    - name: ef-logback
      configMap:
        name: ef-logback-cm
  extraVolumeMounts:
    - name: ef-logback
      mountPath: /logback

# Add the new component ms-email-connector:
ms-email-connector:
  enabled: true
  replicaCount: 1
  image:
    repository: cim/ms-exchange-email-connector
    tag: 4.9
  efConnectionVars: true
  efEnvironmentVars: false
  containerPorts:
    - name: "http-ex-co-8080"
      containerPort: 8080
  extraEnvVars:
    - name: TZ
      value: '{{ .Values.global.efCommonVars_TZ }}'
    - name: LOGGING_CONFIG
      value: '{{ .Values.global.efCommonVars_LOGGING_CONFIG }}'
    - name: CCM_URL
      value: "http://{{ .Values.global.efCxReleaseName }}-ccm-svc.{{ .Release.Namespace }}.svc:8081"
    - name: FILE_ENGINE_URL
      value: "http://{{ .Values.global.efCxReleaseName }}-file-engine-svc.{{ .Release.Namespace }}.svc:8080"
    - name: MASKING_LAYOUT_CLASS
      value: "com.ef.connector.msexchangeemailconnector.utils.MaskingPatternLayout"
    - name: SCHEDULER_FIXED_RATE_IN_MS
      value: "60000"
  service:
    enabled: true
    port: 8080
    portName: "http-ex-co-8080"
    targetPort: "http-ex-co-8080"
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /$2
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
      #cert-manager.io/cluster-issuer: "ef-letsencrypt-prod"
    path: /ms-email-connector(/|$)(.*)
    pathType: ImplementationSpecific
  extraVolumes:
    - name: ef-logback
      configMap:
        name: ef-logback-cm
  extraVolumeMounts:
    - name: ef-logback
      mountPath: /logback

# Add the new component youtube-connector:
youtube-connector:
  enabled: true
  replicaCount: 1
  image:
    repository: cim/youtube-connector
    tag: 4.9
  efConnectionVars: true
  efEnvironmentVars: false
  containerPorts:
    - name: http-yt-co-8080
      containerPort: 8080
  extraEnvVars:
    - name: TZ
      value: "{{ .Values.global.efCommonVars_TZ }}"
    - name: CCM_URL
      value: http://{{ .Values.global.efCxReleaseName }}-ccm-svc.{{ .Release.Namespace }}.svc:8081
    - name: FILE_ENGINE_URL
      value: http://{{ .Values.global.efCxReleaseName }}-file-engine-svc.{{ .Release.Namespace }}.svc:8080
    - name: SCHEDULER_FIXED_RATE_IN_MS
      value: "90000"
    - name: DAYS_TO_KEEP_TOP_LEVEL_COMMENTS
      value: "7"
  service:
    enabled: true
    port: 8080
    portName: http-yt-co-8080
    targetPort: http-yt-co-8080
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /$2
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    path: /youtube-connector(/|$)(.*)
    pathType: ImplementationSpecific

# Add the new component linkedin-connector:
linkedin-connector:
  enabled: true
  replicaCount: 1
  image:
    repository: project_dev/linkedinconnector
    tag: 4.9
  efConnectionVars: false
  efEnvironmentVars: false
  containerPorts:
    - name: "http-li-9001"
      containerPort: 9001
  extraEnvVars:
    - name: http.connect.timeout.sec
      value: "500000"
    - name: http.read.timeout.sec
      value: "1000000"
    - name: http.request.timeout.sec
      value: "10000000"
    - name: enable.ssl.env
      value: "false"
    - name: linkedin.scheduler.fixed-rate
      value: "600000"
    - name: LINKEDIN_CIM_SERVICE_ID
      value: "2001"
    - name: TZ
      value: '{{ .Values.global.efCommonVars_TZ }}'
    - name: LOGGING_CONFIG
      value: '{{ .Values.global.efCommonVars_LOGGING_CONFIG }}'
    - name: LINKEDIN_CIM_SERVICE_URL
      value: "http://{{ .Values.global.efCxReleaseName }}-ccm-svc.{{ .Release.Namespace }}.svc:8081"
    - name: MASKING_LAYOUT_CLASS
      value: "com.linkedin.connector.logging.MaskingPatternLayout"
    - name: DATABASE_URL
      value: jdbc:postgresql://ef-postgresql.ef-external.svc:5432/linkedinmetadata?sslmode=verify-ca&sslrootcert=/postgresql/ca.crt
    - name: DATABASE_USERNAME
      value: "sa"
    - name: DATABASE_PASSWORD
      value: "Expertflow123"
  service:
    enabled: true
    port: 9001
    portName: "http-li-9001"
    targetPort: "http-li-9001"
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /$2
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
      #cert-manager.io/cluster-issuer: "ef-letsencrypt-prod"
    path: /linkedin-connector(/|$)(.*)
    pathType: ImplementationSpecific
  extraVolumes:
    - name: ef-logback
      configMap:
        name: ef-logback-cm
    - name: ef-postgresql-crt-vol
      secret:
        secretName: ef-postgresql-crt
  extraVolumeMounts:
    - name: ef-logback
      mountPath: /logback
    - name: ef-postgresql-crt-vol
      mountPath: /postgresql

# Update the image tag to 4.9 for each of the following connectors:
360-Connector, Facebook-connector, Instagram-connector, SMPP-Connector, Telegram-Connector, Twilio-Connector, Twitter-Connector, Viber-Connector, Email, Exchange-Email, Youtube, Whatsapp, LinkedIn

helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" --debug cx-channels --values helm-values/cx-channels-custom-values.yaml expertflow/channels --version 4.9.0
Update the Agent-Desk helm chart
# Update the Agent Desk helm chart
# Edit/update the values file helm-values/cx-agent-desk-custom-values.yaml with:
Unified-Agent Tag: 4.9
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" cx-agent-desk --debug --values helm-values/cx-agent-desk-custom-values.yaml expertflow/agent-desk --version 4.9.0
Update the Campaigns helm chart
# Update the Campaigns helm chart
# Edit/update the values file helm-values/cx-campaigns-custom-values.yaml with:
campaign-scheduler Tag: 4.9
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" cx-campaigns --debug --values helm-values/cx-campaigns-custom-values.yaml expertflow/campaigns --version 4.9.0
Update Rasa-X
# Update the image tag in external/rasa-x/values-small.yaml
tag: 4.9
helm upgrade --install=true --wait=true --timeout=10m0s --debug rasa-x --namespace rasa-x --values external/rasa-x/values-small.yaml external/rasa-x
Update Reporting Connector
helm uninstall --namespace expertflow cx-reporting

# If using MySQL, get the upgrade script from the following path and execute it on the DB:
CX-4.9/kubernetes/pre-deployment/reportingConnector/dbScripts/dbupdate/historical_reports_db_update_script_MYSQL.sql.sql

# If using MSSQL, get the upgrade script from the following path and execute it on the DB:
CX-4.9/kubernetes/pre-deployment/reportingConnector/dbScripts/dbupdate/historical_reports_db_update_script_MSSQL.sql.sql

helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" cx-reporting --debug --values helm-values/cx-reporting-scheduler-custom-values.yaml expertflow/reporting --version 4.9.0
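The upgrade script can be executed from any host that reaches the reporting database. A hypothetical MySQL invocation is sketched below; the host, user, and database name are placeholders to replace with your reporting DB details, and the script path is the one given above:

```shell
# Hypothetical example: run the MySQL upgrade script against the reporting DB.
# mysql prompts for the password because -p is given without a value.
mysql -h <db-host> -u <db-user> -p <reporting-db> \
  < CX-4.9/kubernetes/pre-deployment/reportingConnector/dbScripts/dbupdate/historical_reports_db_update_script_MYSQL.sql.sql
```

For MSSQL, use your usual client (e.g. `sqlcmd` or SSMS) to run the MSSQL script the same way.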
ClamAV Deployment
helm show values expertflow/clamav --version 4.9.0 > helm-values/cx-clamav-values.yaml
# Edit/update the values file helm-values/cx-clamav-values.yaml with:
global:
  ingressRouter: <DEFAULT-FQDN>
helm upgrade --install --namespace ef-external --set global.efCxReleaseName="ef-cx" cx-clamav --debug --values helm-values/cx-clamav-values.yaml helm/clamav --version 4.9.0
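Once the chart is up, the File Engine settings used elsewhere in this guide (CLAMSCAN_HOST `clamav.ef-external.svc`, CLAMSCAN_PORT `3310`) can be sanity-checked from inside the cluster. A sketch using a throwaway busybox pod; a healthy clamd answers the `PING` command with `PONG`:

```shell
# Verify ClamAV answers on its TCP port from inside the cluster.
kubectl -n ef-external run clamav-check --rm -it --image=busybox --restart=Never \
  -- sh -c 'echo PING | nc clamav.ef-external.svc 3310'
# Expected reply from a healthy clamd: PONG
```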
Update the Superset reports
Use the following guide: https://expertflow-docs.atlassian.net/wiki/x/I4om
Configurations
For LinkedIn configuration, follow the LinkedIn Deployment Guide: https://expertflow-docs.atlassian.net/wiki/x/rwNFNg