Alembic Schema Migration
Introduction
With the introduction of new upgrades within EFCX, object-level changes are introduced that must be reflected in the reporting database as well; to handle this, we leverage Alembic migrations for schema updates. Alembic migrations let us track the schema upgrades made within a specific database via version control, so that at any point in time we can identify which changes have been applied.
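Concretely, Alembic records the database's current revision in a version table (alembic_version by default), so the applied state is always inspectable. As a minimal sketch using Alembic's standard commands, assuming Alembic is already configured as it is inside the transflux pod under /opt/airflow:

```bash
alembic current            # show the revision currently applied to the target database
alembic history --verbose  # list the full chain of recorded revisions
```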
Prerequisites
Expertflow ETL Data Platform deployed
Version upgrade ID (version_id)
Version IDs
Version upgrade ID for 4.10: 175acabfff54
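To inspect what this revision contains before applying it, Alembic's standard show command can print its metadata; run it from the Alembic project directory (/opt/airflow inside the transflux pod, as described in the steps below):

```bash
alembic show 175acabfff54   # prints the revision's message, parent revision, and script path
```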
Steps to proceed
Note that Alembic migrations apply only when previous data schemas already exist in the reporting target databases. For fresh deployments, there is no need for Alembic migrations, as tables are created with the updated schema definitions.
1. Pause all pipelines from the dedicated FQDN of Expertflow ETL Data Platform.
2. Edit the file helm-values/cx-transflux-custom-values.yaml within the transflux directory and update the following environment variable:

```yaml
# Update the connection string as per the dedicated target database
name: ALEMBIC_DB_URL
## For target databases without SSL enabled, use the following connection string.
## Make sure the database name is the same one where the tables to upgrade/downgrade are present.
value: "mysql+pymysql://monty:Expertflow#143@192.168.2.18:3306/hold_db"
## For SSL-enabled target databases, use the following connection string.
## Make sure to copy the certs into the respective directory.
# value: "mysql+pymysql://user:Expertflow123@192.168.2.202:3306/hold_db?ssl_ca=/transflux/certificates/mysql_certs/ca.pem&ssl_cert=/transflux/certificates/mysql_certs/client-cert.pem&ssl_key=/transflux/certificates/mysql_certs/client-key.pem&ssl_verify_cert=false"
```
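One hedged aside, not part of the documented procedure: the example password above contains a '#', and characters that are special in URLs (such as '@' or '#') may need to be percent-encoded when embedded in a SQLAlchemy connection URL. A quick way to produce the encoded form:

```bash
# Percent-encode a password for use in a SQLAlchemy connection URL
python -c "import urllib.parse; print(urllib.parse.quote_plus('Expertflow#143'))"
# -> Expertflow%23143
```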
In case of MSSQL, update the same variable with an MSSQL connection string instead:

```yaml
# Update the connection string as per the dedicated target database
name: ALEMBIC_DB_URL
## For target databases without SSL enabled, use the following connection string.
## Make sure the database name is the same one where the tables to upgrade/downgrade are present.
value: "mssql+pyodbc://sa:Expertflow464@192.168.1.77:1433/hold_db?driver=ODBC+Driver+17+for+SQL+Server"
## For SSL-enabled target databases, use the following connection string.
## Make sure to copy the certs into the respective directory.
# value: "mssql+pyodbc://sa:Expertflow464@192.168.1.77:1433/hold_db?driver=ODBC+Driver+17+for+SQL+Server&ssl_ca=/transflux/certificates/mysql_certs/ca.pem&ssl_cert=/transflux/certificates/mysql_certs/client-cert.pem&ssl_key=/transflux/certificates/mysql_certs/client-key.pem&ssl_verify_cert=false"
```
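Once the pod has been re-deployed (next step), you can optionally sanity-check whichever URL you configured before running any migration. A minimal sketch to run inside the pod; it assumes sqlalchemy and the matching database driver are importable there, which Alembic itself requires:

```bash
# Hypothetical connectivity check: reads ALEMBIC_DB_URL from the environment
# and runs a trivial query that is valid on both MySQL and MSSQL.
python -c "
import os
import sqlalchemy

engine = sqlalchemy.create_engine(os.environ['ALEMBIC_DB_URL'])
with engine.connect() as conn:
    print(conn.exec_driver_sql('SELECT 1').scalar())  # prints 1 on success
"
```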
3. Re-deploy cx-transflux using the following command from within the transflux directory:

```bash
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" cx-transflux --debug --values helm-values/cx-transflux-custom-values.yaml expertflow/transflux
```
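Optionally, verify that the release rolled out before moving on; these are standard Helm and kubectl commands rather than a transflux-specific step:

```bash
helm status cx-transflux -n expertflow            # release should report STATUS: deployed
kubectl get pods -n expertflow | grep transflux   # wait for the pod to reach Running
```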
4. Once the transflux pod is up, exec into it and run the following commands for the schema migration:

```bash
kubectl get pods -n expertflow | grep transflux
kubectl exec -it <podname-from-above-command> -n expertflow -- /bin/bash
cd /opt/airflow
alembic upgrade <version_id>
```
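After the upgrade finishes, it is worth confirming that the database landed on the expected revision, and knowing the rollback path. These are standard Alembic commands; the placeholder revision below is illustrative:

```bash
alembic current                             # should now print the version_id you upgraded to
# To roll back if needed:
# alembic downgrade <previous_version_id>   # or: alembic downgrade -1 (one revision back)
```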
5. Once the schema migration is complete, exit the pod and turn the pipelines back on from the deployed FQDN.