EF Data Platform Deployment
Prerequisites for EF Data Platform Deployment
Fully Qualified Domain Name (FQDN)
A dedicated FQDN is required for CX Transflux (EF Data Platform) to ensure proper routing and secure communication.
Database Setup
Create the target database in your Database Management System by executing the following script:
kubernetes/pre-deployment/reportingConnector/dbScripts/dbcreation/_historical_reports_db_creation_script_MySQL.sql
Ensure that the executing user has sufficient privileges to create databases and tables.
Follow this guide to create the database.
Resource Requirements:
Minimum CPU: 2 cores
Minimum Memory: 4 GB RAM
These resources are essential for optimal performance of the CX Transflux (EF Data Platform) components during data processing and ETL operations.
Deployment
Clone CX-Transflux Repository
git clone -b 4.10 https://efcx:RecRpsuH34yqp56YRFUb@gitlab.expertflow.com/cim/transflux.git transflux
cd transflux
Add the Expertflow Helm charts repository.
helm repo add expertflow https://expertflow.github.io/charts
Update the charts repository
helm repo update expertflow
Create a folder to save the Helm chart’s values
mkdir helm-values
Customise the deployment by creating the custom-values.yaml file and adding the custom configurations as per the requirements.
vi helm-values/cx-transflux-custom-values.yaml
Use the following command to see the default values.yaml
helm show values expertflow/transflux --version 4.10.0
Open the file helm-values/cx-transflux-custom-values.yaml and edit it according to the given information, which is required for the CX Transflux to work properly.
The airflow metadata database is already created when PostgreSQL is deployed.
Value | Updated Value |
---|---|
ingressRouter | Dedicated Fully Qualified Domain Name (FQDN) |
tag | |
MONGODB_PASSWORD | Update the local MongoDB password when using a non-default password |
AIRFLOW__CORE__SQL_ALCHEMY_CONN | Update the local PostgreSQL password when using a non-default password |
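An illustrative sketch of helm-values/cx-transflux-custom-values.yaml follows; the key paths and the Airflow connection-string format are assumptions, so confirm them against the output of `helm show values expertflow/transflux --version 4.10.0` before use:

```yaml
# Illustrative values only — verify the exact key paths against the chart's default values.yaml.
ingressRouter: cx-transflux.example.com    # dedicated FQDN (example hostname)
tag: 4.10.0                                # image tag (placement assumed)
MONGODB_PASSWORD: "<mongodb-password>"     # only when using a non-default password
# Assumed standard Airflow/SQLAlchemy connection-string format:
AIRFLOW__CORE__SQL_ALCHEMY_CONN: "postgresql+psycopg2://<user>:<password>@<postgres-host>:5432/airflow"
```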
To configure CX-Transflux, update the configuration files with your specific values. Open the following config files in the /kubernetes/transflux/config directory and ensure the required information is correctly set:
In case of an MSSQL target database:
target:
type: "mssql"
db_url: "mssql+pyodbc://<your-db-username>:<password>@<host>:<port>/<mssql-db-name>?driver=ODBC+Driver+17+for+SQL+Server"
configdb:
type: "mssql"
db_url: "mssql+pyodbc://<your-db-username>:<password>@<host>:<port>/<mssql-db-name>?driver=ODBC+Driver+17+for+SQL+Server"
In case of a MySQL target database:
target:
type: "mysql"
db_url: "mysql+pymysql://<your-db-username>:<password>@<host>:<port>/<mysql-db-name>"
configdb:
type: "mysql"
db_url: "mysql+pymysql://<your-db-username>:<password>@<host>:<port>/<mysql-db-name>"
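A common pitfall with these db_url strings is a password containing reserved URL characters such as `@` or `:`, which breaks URL parsing. A minimal sketch of assembling a safely encoded URL with only the Python standard library (the helper name is hypothetical, not part of CX-Transflux):

```python
from urllib.parse import quote_plus

def build_db_url(dialect: str, user: str, password: str, host: str,
                 port: int, database: str, query: str = "") -> str:
    """Assemble a SQLAlchemy-style db_url, percent-encoding the credentials
    so characters like '@' or ':' in the password cannot break parsing."""
    url = f"{dialect}://{quote_plus(user)}:{quote_plus(password)}@{host}:{port}/{database}"
    return f"{url}?{query}" if query else url

# A password containing '@' is encoded as %40:
print(build_db_url("mysql+pymysql", "ef_user", "p@ss",
                   "mysql-host", 3306, "historical_reports"))
# mysql+pymysql://ef_user:p%40ss@mysql-host:3306/historical_reports
```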
Create a directory for TLS certificates
mkdir -p certificates
Copy the MongoDB certs into certificates/mongo_certs
For a non-SSL target database, the certificates/mysql_certs directory will remain empty, but the secret will still be created with the command given below.
Place the MySQL/MSSQL certs in the certificates/mysql_certs directory and create a secret for the MySQL certificates to enable TLS encryption. The certificates should include the following files:
ca.pem
client-cert.pem
client-key.pem
kubectl -n expertflow create secret generic ef-transflux-mysql-certs-secret --from-file=certificates/mysql_certs
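Before creating the secret, it can help to confirm that all three expected files are actually in place. A minimal pre-flight sketch (the helper name is hypothetical, not part of the product; for a non-SSL target all three files will be reported missing, which is expected):

```python
from pathlib import Path

# The three files named in the guide for TLS-enabled targets.
REQUIRED_CERTS = ["ca.pem", "client-cert.pem", "client-key.pem"]

def missing_certs(cert_dir: str) -> list:
    """Return the names of required certificate files missing from cert_dir."""
    d = Path(cert_dir)
    return [name for name in REQUIRED_CERTS if not (d / name).is_file()]

missing = missing_certs("certificates/mysql_certs")
if missing:
    print(f"Missing certificate files: {missing} (expected when the target DB is non-SSL)")
```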
Create configuration ConfigMaps for CX-Transflux pipelines.
kubectl -n expertflow create configmap ef-transflux-config-cm --from-file=config
Finally, deploy CX-Transflux.
helm upgrade --install --namespace expertflow --set global.efCxReleaseName="ef-cx" cx-transflux --debug --values helm-values/cx-transflux-custom-values.yaml expertflow/transflux --version 4.10.0
Follow the User Manual to control pipelines from the Data Platform: https://expertflow-docs.atlassian.net/wiki/x/QoA_SQ