Upgrade guide for Data Platform

Pause all pipelines in the Data Platform before proceeding with the upgrade.

  1. Update the Helm charts repo

    helm repo update expertflow
    
  2. Clone the transflux repository

    git clone -b 5.0 https://efcx:RecRpsuH34yqp56YRFUb@gitlab.expertflow.com/cim/transflux.git $HOME/CX-5.0/transflux
  3. Take a backup of the existing configs from transflux/config into transflux/config_backup

  4. Delete the configs from transflux/config

  5. Copy the new configuration into your existing deployment folder as follows

    1. Copy everything from CX-5.0/transflux/config/* to transflux/config

    2. Replace the existing file transflux/dbt_schema/agent_state_summary_gold.yml with CX-5.0/transflux/dbt_schema/agent_state_summary_gold.yml

  6. Edit the newly copied file transflux/config/tenants.yaml and configure the following

The information below can be found in the existing configuration backup file transflux/config_backup/keycloak_users_data_pipeline_config.yaml.

a. By default, the single tenant's name is expertflow

b. FQDN_URL under api

c. TARGET_TYPE (mysql or mssql), matching the dedicated target database

d. For the selected target type, update the dedicated MYSQL or MSSQL credentials

tenants:
  expertflow:
    mongodb:
      SOURCE_HOST: "mongo-mongodb.ef-external.svc"
      SOURCE_PORT: "27017"
      SOURCE_USERNAME: "root"
      SOURCE_PASSWORD: "Expertflow123"
      SOURCE_TLS_ENABLED: true
      SOURCE_DATABASE: "expertflow"

    postgre:
      SOURCE_HOST: "ef-postgresql.ef-external.svc"
      SOURCE_PORT: "5432"
      SOURCE_USERNAME: "sa"
      SOURCE_PASSWORD: "Expertflow123"
      SOURCE_DATABASE: "qm_db"

    api:
      FQDN_URL: "<FQDN URL>"
      REALM: "expertflow"

    TARGET_TYPE: "mysql"

    MYSQL:
      TARGET_HOST: "192.168.2.18"
      TARGET_PORT: "3306"
      TARGET_USERNAME: "monty"
      TARGET_PASSWORD: "Expertflow#143"
      TARGET_SSL_ENABLED: false
      TARGET_DATABASE: "hold_db"

    MSSQL:
      TARGET_HOST: "192.168.1.77"
      TARGET_PORT: "1433"
      TARGET_USERNAME: "sa"
      TARGET_PASSWORD: "Expertflow464"
      TARGET_SSL_ENABLED: false
      TARGET_DATABASE: "hold_db"
  7. To configure the Cisco sync job, edit the file transflux/config/qm_cisco_team_sync_config.yaml and update the endpoint with the FQDN on which the Cisco sync job is configured

    endpoint: "https://{FQDN}/cisco-sync-service/api/v1/sync"
    
  8. Delete the existing config maps and re-create them from the transflux directory

# Delete the existing config maps

kubectl -n expertflow delete cm ef-transflux-config-cm

kubectl -n expertflow delete cm ef-transflux-dbt-schema-cm

# Re-create the config maps (run from the transflux directory)

kubectl -n expertflow create configmap ef-transflux-config-cm --from-file=config

kubectl -n expertflow create configmap ef-transflux-dbt-schema-cm --from-file=dbt_schema
  9. Edit the file helm-values/cx-transflux-custom-values.yaml in the transflux directory and set the image tag to 5.0

    image:
      tag: 5.0
  10. Redeploy the solution

helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" cx-transflux --debug --values helm-values/cx-transflux-custom-values.yaml expertflow/transflux --version 5.0
  11. Access the Data Platform UI. Turn on the data migration pipeline Single_tenant_migration_5.0 using the slider (on the left) and trigger it with the play button

  12. Once the migration is done, turn on the other dedicated pipelines from the UI

  13. After the migration, manually delete all old component databases from MongoDB