
Upgrade Guide CX4.7 to CX4.7.1

Before upgrading, ensure that the system is idle, i.e., all agents are logged out of the AgentDesk.
Keep the system idle for 30 minutes so that the reporting data can sync.
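You can optionally record the currently deployed releases before you start, so the original chart versions (and rollback targets) are known; this uses helm's standard list command:

CODE
helm list -n expertflow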

  1. Update the helm repo

    CODE
    helm repo update expertflow
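    
    # Optionally confirm that chart version 4.7.1 is now visible in the
    # updated repo (helm's built-in search):
    helm search repo expertflow --versions | grep 4.7.1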
  2. Update the CX helm chart

    CODE
    # Update Core Component helm chart
    # Edit/update the values file helm-values/ef-cx-custom-values.yaml with the following image tags:
    
    Agent-Manager Tag : 4.7.1
    Bot-Framework Tag : 4.7.1
    Conversation-Manager Tag : 4.7.1
    Customer-Widget Tag : 4.7.1
    Historical-Reports Tag : 4.7.1
    Routing-Engine Tag : 4.7.1
    
    helm upgrade --install --namespace expertflow --create-namespace   ef-cx  --debug --values helm-values/ef-cx-custom-values.yaml expertflow/cx --version 4.7.1
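    
    # Optional sanity check: list the image each pod is actually running so the
    # 4.7.1 tags can be verified (plain kubectl, no assumptions about pod names):
    kubectl -n expertflow get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'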
  3. Update the Agent-Desk helm chart

    CODE
    # Update Agent Desk helm chart
    # Edit/update the values file helm-values/cx-agent-desk-custom-values.yaml with the following image tag:
    Unified-Agent Tag : 4.7.1
    
    helm upgrade --install --namespace expertflow   --set global.efCxReleaseName="ef-cx"  cx-agent-desk  --debug --values helm-values/cx-agent-desk-custom-values.yaml expertflow/agent-desk --version 4.7.1
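    
    # Optionally review the release history; each upgrade is recorded as a new
    # revision, which is also the rollback target if anything goes wrong:
    helm -n expertflow history cx-agent-desk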
  4. Update the Campaigns helm chart

    CODE
    # Update Campaigns helm chart
    # Edit/update the values file helm-values/cx-campaigns-custom-values.yaml with the following image tags:
    Campaigns-Backend Tag : 4.7.1
    Campaigns-Studio Tag : 4.7.1
    
    helm upgrade --install --namespace expertflow   --set global.efCxReleaseName="ef-cx"  cx-campaigns --debug --values helm-values/cx-campaigns-custom-values.yaml expertflow/campaigns --version 4.7.1 
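    
    # At this point the core, agent-desk, and campaigns releases should all
    # report the upgraded chart version; a quick cross-check:
    helm -n expertflow list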
  5. Update the Expertflow ETL helm chart

    1. Create the database ‘airflow’ in PostgreSQL

      1. Run the following commands to access the Postgres shell

        CODE
        export POSTGRES_PASSWORD=$(kubectl get secret --namespace ef-external ef-postgresql -o jsonpath="{.data.password}" | base64 -d)
        
        kubectl run ef-postgresql-client --rm --tty -i --restart='Never' --namespace ef-external --image docker.io/bitnami/postgresql:14.5.0-debian-11-r21 --env="PGPASSWORD=$POSTGRES_PASSWORD" \
              --command -- psql --host ef-postgresql -U sa -d licenseManager -p 5432
      2. Now that you are in the Postgres shell, run the following to create the airflow database (a non-interactive alternative is sketched below)

        CODE
        CREATE DATABASE airflow;
        
        # run \l to verify the database was created
        # run exit to leave the pod
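        Alternatively, the database can be created without entering the interactive shell; a sketch reusing the same client-pod invocation with psql's -c flag:

        CODE
        kubectl run ef-postgresql-client --rm --tty -i --restart='Never' --namespace ef-external --image docker.io/bitnami/postgresql:14.5.0-debian-11-r21 --env="PGPASSWORD=$POSTGRES_PASSWORD" \
              --command -- psql --host ef-postgresql -U sa -d licenseManager -p 5432 -c "CREATE DATABASE airflow;"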
    2. Upgrade the data models via Alembic

      1. Uninstall the previously running Expertflow ETL solution

        CODE
        helm -n expertflow uninstall cx-transflux
      2. Go to the transflux directory, where the config and helm-values folders are present, and edit the existing file helm-values/cx-transflux-custom-values.yaml as follows

        CODE
        ## Update the tag
        tag: 4.7.1
        
        ## Remove the following environment variables from extraEnvVars
        
          - name: AIRFLOW_DB_NAME
            value: <your-value>
          - name: AIRFLOW_DB_USER
            value: <your-value>
          - name: AIRFLOW_DB_PASSWORD
            value: <your-value>
          - name: MYSQL_ROOT_PASSWORD
            value: <your-value>
          - name: MYSQL_HOST
            value: <your-value>
          - name: MYSQL_PORT
            value: <your-value>
          - name: AIRFLOW__CORE__MYSQL_SSL_CA
            value: /transflux/certificates/mysql_certs/ca.pem
          - name: AIRFLOW__CORE__MYSQL_SSL_CERT
            value: /transflux/certificates/mysql_certs/client-cert.pem
          - name: AIRFLOW__CORE__MYSQL_SSL_KEY
            value: /transflux/certificates/mysql_certs/client-key.pem 
        
        ## Update the following environment variable in extraEnvVars
        ## You can find your POSTGRES_PASSWORD with the following commands
        ## export POSTGRES_PASSWORD=$(kubectl get secret --namespace ef-external ef-postgresql -o jsonpath="{.data.password}" | base64 -d)
        ## echo $POSTGRES_PASSWORD
        
          - name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
            value: postgresql+psycopg2://sa:<POSTGRES_PASSWORD>@ef-postgresql.ef-external.svc:5432/airflow?sslmode=verify-ca&sslrootcert=/postgresql/ca.crt
            
        ## Add the following environment variable in extraEnvVars
        ## This variable is required for users who already have data in the MySQL database;
        ## users who are moving to an MSSQL database can leave the value as it is
        
          - name: ALEMBIC_DB_URL
            ## For target databases without SSL enabled (the target database details can be found in config/forms_data_pipeline_config.yaml), use the following connection string; make sure the database name is the one where the tables to be upgraded/downgraded are present
            value: "mysql+pymysql://<db_username>:<db_password>@<db_host>:<db_port>/<database_name>"
            
            ## For SSL-enabled target databases, use the following connection string instead; make sure the certs are present in the respective directory
            # value: "mysql+pymysql://<db_username>:<db_password>@<db_host>:<db_port>/<database_name>?ssl_ca=/transflux/certificates/mysql_certs/ca.pem&ssl_cert=/transflux/certificates/mysql_certs/client-cert.pem&ssl_key=/transflux/certificates/mysql_certs/client-key.pem&ssl_verify_cert=false"
          
        ## Add the following in extraVolumes
        
           - name: ef-postgresql-crt-vol
             secret:
               secretName: ef-postgresql-crt
               
        ## Add the following in extraVolumeMounts
        
           - name: ef-postgresql-crt-vol
             mountPath: /postgresql
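        Before redeploying, you can optionally render the chart locally to confirm the edited values file still parses; helm template accepts the same flags as the upgrade command in the next step:

        CODE
        helm template --namespace expertflow --set global.efCxReleaseName="ef-cx" --set global.ingressRouter=<FQDN> cx-transflux --values helm-values/cx-transflux-custom-values.yaml expertflow/transflux --version 4.7.1 > /dev/null && echo "values file renders cleanly"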
      3. Deploy the Expertflow ETL solution with your dedicated FQDN

        CODE
        helm upgrade --install --namespace expertflow   --set global.efCxReleaseName="ef-cx" --set global.ingressRouter=<FQDN>  cx-transflux --debug --values helm-values/cx-transflux-custom-values.yaml  expertflow/transflux --version 4.7.1 

Customers who are already using MySQL and have existing data in the MySQL database should follow the steps below for the schema upgrade:

  1. Once the solution is deployed initially, the pipelines will be turned off. Keep them off until you have completed the step below.

  2. Once the transflux pod is up, exec into the pod and run the following commands for the schema migration

    CODE
    kubectl get pods -n expertflow | grep transflux
    
    kubectl exec -it <podname-from-above-command> -n expertflow -- /bin/bash
    
    cd /opt/airflow
    
    alembic upgrade head
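    
    # Optionally confirm the migration landed; alembic's built-in revision check
    # prints the current schema revision after the upgrade:
    alembic current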
  3. Once the schema migration is complete, exit the pod and turn on the pipelines from the deployed FQDN

Customers who are moving to an MSSQL target database should follow the steps below:

  1. Pause the pipelines from the dedicated FQDN if they are running

  2. Delete the existing ConfigMap using

CODE
kubectl delete configmap ef-transflux-config-cm -n expertflow
  3. Go to the transflux directory where the config and helm-values folders are present and edit the existing config files in the config/ directory

CODE
target:
  type: "mssql"
  db_url: "mssql+pyodbc://<your-db-username>:<password>@<host>:<port>/<mssql-db-name>?driver=ODBC+Driver+17+for+SQL+Server"
configdb:
  type: "mssql"
  db_url: "mssql+pyodbc://<your-db-username>:<password>@<host>:<port>/<mssql-db-name>?driver=ODBC+Driver+17+for+SQL+Server"
  4. Create the ConfigMap to reflect the changes; make sure you are in the transflux directory

CODE
kubectl -n expertflow create configmap ef-transflux-config-cm --from-file=config
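
# Optionally verify that the recreated ConfigMap contains the edited files:
kubectl -n expertflow describe configmap ef-transflux-config-cm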
  5. Run the pipelines from the dedicated FQDN and verify the data in the MSSQL target database
