
Alembic Schema Migration

Introduction

New upgrades within EFCX can introduce object-level changes that must also be reflected in the reporting database. To handle this, we use Alembic migrations for schema updates. Alembic keeps the schema upgrades applied to a specific database under version control, so that at any point in time we can identify exactly which changes have been made.
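
For context, Alembic records the currently applied revision in a small bookkeeping table named alembic_version inside the target database. A minimal sketch of inspecting it, assuming a MySQL target and the example host and credentials used later in this guide:

CODE
# Print the revision currently recorded by Alembic in the target database
mysql -h 192.168.2.18 -u monty -p hold_db -e "SELECT version_num FROM alembic_version;"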

Prerequisites

Version IDs

  1. Version ID for the upgrade to 4.10: 175acabfff54
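
You can cross-check this revision against the migration scripts bundled with transflux; a quick sketch, assuming the Alembic environment lives under /opt/airflow inside the pod as in the steps below:

CODE
# List all revisions known to the Alembic environment; the 4.10 upgrade
# revision 175acabfff54 should appear in the output
cd /opt/airflow
alembic history --verbose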

Steps to proceed

Note that Alembic migrations apply only when a previous schema already exists in the target reporting database. Fresh deployments do not need Alembic migrations, since their tables are created with the updated schema definitions.

  1. Pause all pipelines from the dedicated FQDN of the Expertflow ETL Data Platform.

  2. Edit the file helm-values/cx-transflux-custom-values.yaml within the transflux directory and update the following environment variable:

    CODE
    # Update the connection string as per the dedicated target database
    name: ALEMBIC_DB_URL
    ## For target databases without SSL enabled, use the following connection string; make sure the database name is the one where the tables to upgrade/downgrade are present
    value: "mysql+pymysql://monty:Expertflow#143@192.168.2.18:3306/hold_db"
    ## For SSL-enabled target databases, use the following connection string instead; make sure to copy the certs into the respective directory
    # value: "mysql+pymysql://user:Expertflow123@192.168.2.202:3306/hold_db?ssl_ca=/transflux/certificates/mysql_certs/ca.pem&ssl_cert=/transflux/certificates/mysql_certs/client-cert.pem&ssl_key=/transflux/certificates/mysql_certs/client-key.pem&ssl_verify_cert=false"
    
    1. In the case of MSSQL, use the following instead:

      CODE
      # Update the connection string as per the dedicated target database
      name: ALEMBIC_DB_URL
      ## For target databases without SSL enabled, use the following connection string; make sure the database name is the one where the tables to upgrade/downgrade are present
      value: "mssql+pyodbc://sa:Expertflow464@192.168.1.77:1433/hold_db?driver=ODBC+Driver+17+for+SQL+Server"
      ## For SSL-enabled target databases, use the following connection string instead; make sure to copy the certs into the respective directory
      # value: "mssql+pyodbc://sa:Expertflow464@192.168.1.77:1433/hold_db?driver=ODBC+Driver+17+for+SQL+Server&ssl_ca=/transflux/certificates/mysql_certs/ca.pem&ssl_cert=/transflux/certificates/mysql_certs/client-cert.pem&ssl_key=/transflux/certificates/mysql_certs/client-key.pem&ssl_verify_cert=false"
      
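    Optionally, sanity-check the connection string before re-deploying, using the same SQLAlchemy dialect that Alembic will use. This is a hypothetical check; it assumes python3 and the sqlalchemy package are available wherever you run it, and it uses the example MySQL URL from above:

    CODE
    # Should print 1 if the ALEMBIC_DB_URL value is reachable and valid
    python3 -c "from sqlalchemy import create_engine, text; print(create_engine('mysql+pymysql://monty:Expertflow#143@192.168.2.18:3306/hold_db').connect().execute(text('SELECT 1')).scalar())"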
  3. Re-deploy cx-transflux by running the following command from within the transflux directory:

    CODE
    helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" cx-transflux --debug --values helm-values/cx-transflux-custom-values.yaml expertflow/transflux
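
    To confirm the release applied cleanly before proceeding, you can inspect its status (cx-transflux is the release name used in the command above):

    CODE
    # Should report STATUS: deployed for the release
    helm status cx-transflux -n expertflow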
  4. Once the transflux pod is up, exec into it and run the following commands for the schema migration:

    CODE
    # Find the transflux pod name
    kubectl get pods -n expertflow | grep transflux
    
    # Open a shell inside the pod
    kubectl exec -it <podname-from-above-command> -n expertflow -- /bin/bash
    
    # Move to the directory that holds the Alembic environment
    cd /opt/airflow
    
    # Apply the migration, e.g. 175acabfff54 for the 4.10 upgrade
    alembic upgrade <version_id>
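
    Before exiting the pod, you can verify that the migration landed. As a sketch: alembic current prints the revision now recorded in the database, and alembic downgrade is available if a rollback is ever needed:

    CODE
    # Confirm the applied revision; for the 4.10 upgrade this should show 175acabfff54
    alembic current
    
    # Roll back one revision if ever required
    # alembic downgrade -1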
  5. Once the schema migration is complete, exit the pod and resume the pipelines from the deployed FQDN.
