- Pause all pipelines in the Data Platform before proceeding with the upgrade
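If you prefer the command line, and assuming the Data Platform is backed by Apache Airflow (the slider/play-button UI suggests this, but treat it as an assumption), pipelines can also be paused from the pod; the pod name and DAG id below are placeholders:
# Assumption: Airflow-based Data Platform; replace <airflow-pod> and <dag_id> with your actual values
kubectl -n expertflow exec -it <airflow-pod> -- airflow dags pause <dag_id>
# List DAGs and their paused state to confirm
kubectl -n expertflow exec -it <airflow-pod> -- airflow dags list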
- Update the charts repo
helm repo update expertflow
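Optionally, confirm that the 5.0 chart is now visible in the updated repo (a quick sanity check, not required by the upgrade itself):
# The transflux chart should list a 5.0 version
helm search repo expertflow/transflux --versions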
- Clone the transflux repository
git clone -b 5.0 https://efcx:RecRpsuH34yqp56YRFUb@gitlab.expertflow.com/cim/transflux.git $HOME/CX-5.0/transflux
- Take a backup of the existing configs from transflux/config into transflux/config_backup
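A minimal sketch of taking the backup, assuming you run it from the parent directory of the existing transflux deployment:
# Copy the current configs into a backup folder before they are replaced
cp -r transflux/config transflux/config_backup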
- Delete the configs from transflux/config
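For example, once the backup above is verified, the old configs can be cleared with (double-check the path before running):
# Remove the old config files; they remain available in transflux/config_backup
rm -rf transflux/config/*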
- Copy the following into your existing deployment configuration folder:
  from the folder CX-5.0/transflux/config/* to transflux/config
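Assuming the repository was cloned to $HOME/CX-5.0 as in the clone step above, the copy could look like this:
# Copy the 5.0 config files into the existing deployment's config folder
cp -r $HOME/CX-5.0/transflux/config/* transflux/config/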
- Replace the existing file transflux/dbt_schema/agent_state_summary_gold.yml with CX-5.0/transflux/dbt_schema/agent_state_summary_gold.yml
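Again assuming the $HOME/CX-5.0 clone path, a possible command for this replacement:
# Overwrite the old schema file with the 5.0 version
cp $HOME/CX-5.0/transflux/dbt_schema/agent_state_summary_gold.yml transflux/dbt_schema/agent_state_summary_gold.yml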
- Edit the newly copied file transflux/config/tenants.yaml and configure the following. The information mentioned below can be found in the existing configuration backup file transflux/config_backup/keycloak_users_data_pipeline_config.yaml
  a. By default, the single tenant's name is expertflow
  b. FQDN_URL in api
  c. TARGET_TYPE (mysql or mssql) as per the dedicated target database
  d. For the selected target type, update the dedicated MYSQL or MSSQL credentials
tenants:
  expertflow:
    mongodb:
      SOURCE_HOST: "mongo-mongodb.ef-external.svc"
      SOURCE_PORT: "27017"
      SOURCE_USERNAME: "root"
      SOURCE_PASSWORD: "Expertflow123"
      SOURCE_TLS_ENABLED: true
      SOURCE_DATABASE: "expertflow"
    postgre:
      SOURCE_HOST: "ef-postgresql.ef-external.svc"
      SOURCE_PORT: "5432"
      SOURCE_USERNAME: "sa"
      SOURCE_PASSWORD: "Expertflow123"
      SOURCE_DATABASE: "qm_db"
    api:
      FQDN_URL: "<FQDN URL>"
      REALM: "expertflow"
    TARGET_TYPE: "mysql"
    MYSQL:
      TARGET_HOST: "192.168.2.18"
      TARGET_PORT: "3306"
      TARGET_USERNAME: "monty"
      TARGET_PASSWORD: "Expertflow#143"
      TARGET_SSL_ENABLED: false
      TARGET_DATABASE: "hold_db"
    MSSQL:
      TARGET_HOST: "192.168.1.77"
      TARGET_PORT: "1433"
      TARGET_USERNAME: "sa"
      TARGET_PASSWORD: "Expertflow464"
      TARGET_SSL_ENABLED: false
      TARGET_DATABASE: "hold_db"
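Optionally, sanity-check the edited file for YAML syntax errors before the config maps are recreated; this assumes Python 3 with PyYAML is available on the host:
# Fails with a parse error if tenants.yaml is malformed
python3 -c "import yaml; yaml.safe_load(open('transflux/config/tenants.yaml')); print('tenants.yaml parsed OK')"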
- To configure the Cisco sync job, edit the file transflux/config/qm_cisco_team_sync_config.yaml and update the endpoint with the FQDN on which the Cisco sync job is configured:
endpoint: "https://{FQDN}/cisco-sync-service/api/v1/sync"
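For example, with a hypothetical FQDN of cx.example.com, the updated line would read:
endpoint: "https://cx.example.com/cisco-sync-service/api/v1/sync"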
- Delete the existing config maps and re-create them from the transflux directory
# Delete existing config maps
kubectl delete cm ef-transflux-config-cm -n expertflow
kubectl delete cm ef-transflux-dbt-schema-cm -n expertflow
# Re-create the config maps
kubectl -n expertflow create configmap ef-transflux-config-cm --from-file=config
kubectl -n expertflow create configmap ef-transflux-dbt-schema-cm --from-file=dbt_schema
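Optionally, confirm that both config maps were recreated:
# Both ef-transflux-config-cm and ef-transflux-dbt-schema-cm should be listed with a recent AGE
kubectl -n expertflow get cm | grep ef-transflux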
- Edit the file helm-values/cx-transflux-custom-values.yaml in the transflux directory and update the image tag:
image:
  tag: 5.0
- Redeploy the solution
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" cx-transflux --debug --values helm-values/cx-transflux-custom-values.yaml expertflow/transflux --version 5.0
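Afterwards, verify that the transflux pods restart on the new image and reach the Running state; the grep pattern assumes the release name used above:
# Pods should come up Running with the 5.0 image
kubectl -n expertflow get pods | grep transflux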
- Access the Data Platform UI. Turn on the data migration pipeline Single_tenant_migration_5.0 using the slider (on the left) and trigger it from the play button
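If the UI is unreachable, and assuming again that the Data Platform is Apache Airflow, the same pipeline can be unpaused and triggered from the pod; the pod name is a placeholder:
# Assumption: Airflow-based Data Platform; replace <airflow-pod> with the actual pod name
kubectl -n expertflow exec -it <airflow-pod> -- airflow dags unpause Single_tenant_migration_5.0
kubectl -n expertflow exec -it <airflow-pod> -- airflow dags trigger Single_tenant_migration_5.0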
- Once the migration is done, turn on the other dedicated pipelines from the UI
- After the migration is complete, also delete all old component databases from MongoDB manually
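A sketch of dropping one such database, assuming MongoDB runs as the mongo-mongodb StatefulSet in the ef-external namespace (per the service name in tenants.yaml) and that mongosh is available in the image; the database name and root password are placeholders, and TLS options may be required since SOURCE_TLS_ENABLED is true:
# Drop one old component database; repeat for each database to be removed
kubectl -n ef-external exec -it mongo-mongodb-0 -- mongosh -u root -p <mongodb-root-password> --eval 'db.getSiblingDB("<old_component_db>").dropDatabase()'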