Upgrade Guide: CX 4.7 to CX 4.7.1
Before upgrading, ensure that the system is idle, i.e., all agents are logged out from the AgentDesk.
Keep the system idle for 30 minutes so that the reporting data can sync.
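Optionally, before changing anything, record the currently deployed releases and chart versions so you have a rollback reference; a quick check, assuming the release names used in this guide:

```bash
# List all releases in the expertflow namespace with their chart versions
helm list -n expertflow
```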
Update the helm repo
```bash
helm repo update expertflow
```
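To confirm that the 4.7.1 charts are now visible, you can optionally search the repo (assuming it is named expertflow as above):

```bash
# Verify that version 4.7.1 of the CX chart is available
helm search repo expertflow/cx --versions | head
```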
Update the CX helm chart
```bash
# Update Core Component helm chart.
# Edit/update the values file helm-values/ef-cx-custom-values.yaml with:
#   Agent-Manager Tag        : 4.7.1
#   Bot-Framework Tag        : 4.7.1
#   Conversation-Manager Tag : 4.7.1
#   Customer-Widget Tag      : 4.7.1
#   Historical-Reports Tag   : 4.7.1
#   Routing-Engine Tag       : 4.7.1
helm upgrade --install --namespace expertflow --create-namespace ef-cx --debug \
  --values helm-values/ef-cx-custom-values.yaml expertflow/cx --version 4.7.1
```
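Optionally, verify that the upgrade rolled out cleanly before moving on; a minimal check, assuming the release name ef-cx used above:

```bash
# Confirm the release status, then watch the pods restart with the new tags
helm status ef-cx -n expertflow
kubectl get pods -n expertflow -w
```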
Update the Agent-Desk helm chart
```bash
# Update Agent Desk helm chart.
# Edit/update the values file helm-values/cx-agent-desk-custom-values.yaml with:
#   Unified-Agent Tag : 4.7.1
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" \
  cx-agent-desk --debug --values helm-values/cx-agent-desk-custom-values.yaml \
  expertflow/agent-desk --version 4.7.1
```
Update the Campaigns helm chart
```bash
# Update Campaigns helm chart.
# Edit/update the values file helm-values/cx-campaigns-custom-values.yaml with:
#   Campaigns-Backend Tag : 4.7.1
#   Campaigns-Studio Tag  : 4.7.1
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" \
  cx-campaigns --debug --values helm-values/cx-campaigns-custom-values.yaml \
  expertflow/campaigns --version 4.7.1
```
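The same kind of post-upgrade check applies to these two releases; for example, confirm both now report the new chart version:

```bash
# Both releases should show chart version 4.7.1 in the CHART column
helm list -n expertflow | grep -E 'cx-agent-desk|cx-campaigns'
```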
Update the Expertflow ETL helm chart
Create the `airflow` database in PostgreSQL
Run the following commands to access the Postgres shell
```bash
export POSTGRES_PASSWORD=$(kubectl get secret --namespace ef-external ef-postgresql \
  -o jsonpath="{.data.password}" | base64 -d)
kubectl run ef-postgresql-client --rm --tty -i --restart='Never' --namespace ef-external \
  --image docker.io/bitnami/postgresql:14.5.0-debian-11-r21 \
  --env="PGPASSWORD=$POSTGRES_PASSWORD" \
  --command -- psql --host ef-postgresql -U sa -d licenseManager -p 5432
```
Now that you are in the Postgres shell, run the following to create the airflow database:
```sql
CREATE DATABASE airflow;
-- run \l to verify
-- run exit to exit the pod
```
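If you want to confirm the database exists without reopening an interactive shell, a one-off check along the lines of the client command above should work (same image and credentials assumed):

```bash
# Connecting with -d airflow fails if the database does not exist;
# \l lists all databases for a visual check.
kubectl run ef-postgresql-client --rm --tty -i --restart='Never' --namespace ef-external \
  --image docker.io/bitnami/postgresql:14.5.0-debian-11-r21 \
  --env="PGPASSWORD=$POSTGRES_PASSWORD" \
  --command -- psql --host ef-postgresql -U sa -d airflow -p 5432 -c '\l'
```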
Upgrade the data models via Alembic
Uninstall the previously deployed Expertflow ETL solution
```bash
helm -n expertflow uninstall cx-transflux
```
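You can optionally confirm the release is gone before proceeding:

```bash
# Should print nothing once the uninstall has completed
helm list -n expertflow | grep transflux
```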
Go to the transflux directory where the `config` and `helm-values` folders are present, and edit the existing file `helm-values/cx-transflux-custom-values.yaml` as follows:

```yaml
## Update the tag
tag: 4.7.1

## Remove the following environment variables from extraEnvVars
- name: AIRFLOW_DB_NAME
  value: <your-value>
- name: AIRFLOW_DB_USER
  value: <your-value>
- name: AIRFLOW_DB_PASSWORD
  value: <your-value>
- name: MYSQL_ROOT_PASSWORD
  value: <your-value>
- name: MYSQL_HOST
  value: <your-value>
- name: MYSQL_PORT
  value: <your-value>
- name: AIRFLOW__CORE__MYSQL_SSL_CA
  value: /transflux/certificates/mysql_certs/ca.pem
- name: AIRFLOW__CORE__MYSQL_SSL_CERT
  value: /transflux/certificates/mysql_certs/client-cert.pem
- name: AIRFLOW__CORE__MYSQL_SSL_KEY
  value: /transflux/certificates/mysql_certs/client-key.pem

## Update the following environment variable in extraEnvVars.
## You can find your POSTGRES_PASSWORD with:
##   export POSTGRES_PASSWORD=$(kubectl get secret --namespace ef-external ef-postgresql -o jsonpath="{.data.password}" | base64 -d)
##   echo $POSTGRES_PASSWORD
- name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
  value: postgresql+psycopg2://sa:<POSTGRES_PASSWORD>@ef-postgresql.ef-external.svc:5432/airflow?sslmode=verify-ca&sslrootcert=/postgresql/ca.crt

## Add the following environment variable in extraEnvVars.
## It is required for users who already have data in a MySQL database;
## users moving to an MSSQL database can leave the value as it is.
- name: ALEMBIC_DB_URL
  ## For target databases without SSL (see config/forms_data_pipeline_config.yaml),
  ## use the following connection string; make sure the database name is the one
  ## containing the tables to upgrade/downgrade.
  value: "mysql+pymysql://<db_username>:<db_password>@<db_host>:<db_port>/<database_name>"
  ## For SSL-enabled target databases, use the following connection string instead;
  ## make sure the certs are present in the respective directory.
  # value: "mysql+pymysql://<db_username>:<db_password>@<db_host>:<db_port>/<database_name>?ssl_ca=/transflux/certificates/mysql_certs/ca.pem&ssl_cert=/transflux/certificates/mysql_certs/client-cert.pem&ssl_key=/transflux/certificates/mysql_certs/client-key.pem&ssl_verify_cert=false"

## Add the following in extraVolumes
- name: ef-postgresql-crt-vol
  secret:
    secretName: ef-postgresql-crt

## Add the following in extraVolumeMounts
- name: ef-postgresql-crt-vol
  mountPath: /postgresql
```
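Since the AIRFLOW__CORE__SQL_ALCHEMY_CONN value embeds the live password, it can help to print the finished string once and paste it into the values file; a small sketch using the same secret lookup as above:

```bash
# Build the connection string with the real password substituted in
export POSTGRES_PASSWORD=$(kubectl get secret --namespace ef-external ef-postgresql \
  -o jsonpath="{.data.password}" | base64 -d)
echo "postgresql+psycopg2://sa:${POSTGRES_PASSWORD}@ef-postgresql.ef-external.svc:5432/airflow?sslmode=verify-ca&sslrootcert=/postgresql/ca.crt"
```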
Deploy the Expertflow ETL solution with your dedicated FQDN:

```bash
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" \
  --set global.ingressRouter=<FQDN> cx-transflux --debug \
  --values helm-values/cx-transflux-custom-values.yaml expertflow/transflux --version 4.7.1
```
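Optionally, check that the release deployed and the pod is coming up before continuing:

```bash
# Release status plus a quick pod check
helm -n expertflow status cx-transflux
kubectl get pods -n expertflow | grep transflux
```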
Customers who are already using MySQL and have their data in a MySQL database should follow the steps below for the schema upgrade.
When the solution is first deployed, the pipelines are turned off. Keep them off until you have completed the following step.
Once the transflux pod is up, open a shell inside it and run the following commands for the schema migration:
```bash
kubectl get pods -n expertflow | grep transflux
kubectl exec -it <podname-from-above-command> -n expertflow -- /bin/bash
cd /opt/airflow
alembic upgrade head
```
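Before exiting the pod, you can optionally confirm that the migration landed; `alembic current` is the standard check and should now report the head revision:

```bash
# Still inside the pod, in /opt/airflow
alembic current
```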
Once the schema migration is complete, exit the pod and turn the pipelines on from the deployed FQDN.
Customers who are moving to an MSSQL target database should follow the steps below.
Pause the pipelines from the dedicated FQDN if they are running.
Delete the existing ConfigMap:

```bash
kubectl delete cm ef-transflux-config-cm -n expertflow
```
Go to the transflux directory where the `config` and `helm-values` folders are present, and edit the existing config files in the `config/` directory:
```yaml
target:
  type: "mssql"
  db_url: "mssql+pyodbc://<your-db-username>:<password>@<host>:<port>/<mssql-db-name>?driver=ODBC+Driver+17+for+SQL+Server"

configdb:
  type: "mssql"
  db_url: "mssql+pyodbc://<your-db-username>:<password>@<host>:<port>/<mssql-db-name>?driver=ODBC+Driver+17+for+SQL+Server"
```
Create the ConfigMap to reflect the changes; make sure you are in the transflux directory:
```bash
kubectl -n expertflow create configmap ef-transflux-config-cm --from-file=config
```
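Optionally, confirm that the recreated ConfigMap contains the edited files:

```bash
# The Data section should list the files from the config/ directory
kubectl -n expertflow describe configmap ef-transflux-config-cm
```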
Run the pipelines from the dedicated FQDN and verify the data in the MSSQL target database.