EF Data Platform Deployment 4.4.10 to 4.10
Prerequisites for EF Data Platform Deployment
Fully Qualified Domain Name (FQDN)
A dedicated FQDN is required for CX Transflux (EF Data Platform) to ensure proper routing and secure communication.
Database Setup
Create the target database in your Database Management System by executing the following script:
kubernetes/pre-deployment/reportingConnector/dbScripts/dbcreation/_historical_reports_db_creation_script_MySQL.sql
Ensure that the executing user has sufficient privileges to create databases and tables.
Follow this guide to create the database.
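If the target DBMS is MySQL, one way to run the creation script is with the mysql command-line client. The host and user below are placeholders for your environment, and the command assumes it is run from the directory that contains the script path shown above.
mysql -h <db-host> -P 3306 -u <db-user> -p < kubernetes/pre-deployment/reportingConnector/dbScripts/dbcreation/_historical_reports_db_creation_script_MySQL.sql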
Resource Requirements:
Minimum CPU: 2 cores
Minimum Memory: 4 GB RAM
These resources are essential for optimal performance of the CX Transflux (EF Data Platform) components during data processing and ETL operations.
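As a quick sanity check before deploying, you can confirm that the cluster nodes have this much headroom. Note that kubectl top nodes only works when the metrics-server is installed in the cluster.
kubectl top nodes
kubectl describe nodes | grep -A 7 "Allocated resources"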
Deployment
Clone CX-Transflux Repository
git clone -b 4.10_f-fnb https://efcx:RecRpsuH34yqp56YRFUb@gitlab.expertflow.com/cim/transflux.git transflux
cd transflux
Add the Expertflow Helm charts repository.
helm repo add expertflow https://expertflow.github.io/charts
Update the charts repository
helm repo update expertflow
Create Namespace
kubectl create ns transflux
Create a ConfigMap for the EF connection variables
vi ef-cx-efconnections.yaml
apiVersion: v1
data:
  ACTIVEMQ_CA_CERT: /activemq/ca.crt
  ACTIVEMQ_CLIENT_CERT: /activemq/tls.crt
  ACTIVEMQ_CLIENT_KEY: /activemq/tls.key
  ACTIVEMQ_KEY_STORE_PASSWORD: Expertflow123
  ACTIVEMQ_KEY_STORE_PATH: activemq_keystore.p12
  ACTIVEMQ_MAX_RECONNECT_ATTEMPTS: "-1"
  ACTIVEMQ_OPEN_WIRE_PORT: "61617"
  ACTIVEMQ_PASSWORD: RXhwZXJ0ZmxvdzQ2NA
  ACTIVEMQ_PRIMARY_URL: activemq.ef-external.svc
  ACTIVEMQ_PRIORITY_BACKUP: "true"
  ACTIVEMQ_RANDOMIZE: "false"
  ACTIVEMQ_SECONDARY_URL: activemq.ef-external.svc
  ACTIVEMQ_STOMP_PORT: "61615"
  ACTIVEMQ_TIMEOUT: "5000"
  ACTIVEMQ_TRANSPORT: ssl
  ACTIVEMQ_TRUST_STORE_PASSWORD: Expertflow123
  ACTIVEMQ_TRUST_STORE_PATH: activemq_truststore.p12
  ACTIVEMQ_USERNAME: admin
  KEY_STORE_PASSWORD: Expertflow123
  KEYCLOAK_BEARER_ONLY: "true"
  KEYCLOAK_CLIENT_DB_ID: ef61df80-061c-4c29-b9ac-387e6bf67052
  KEYCLOAK_CLIENT_ID: cim
  KEYCLOAK_CONFIDENTIAL_PORT: "0"
  KEYCLOAK_CREDENTIALS: '{"secret": "ef61df80-061c-4c29-b9ac-387e6bf67052"}'
  KEYCLOAK_GRANT_TYPE: password
  KEYCLOAK_GRANT_TYPE_PAT: client_credentials
  KEYCLOAK_HOST: http://keycloak.ef-external.svc/auth/
  KEYCLOAK_PASSWORD_ADMIN: admin
  KEYCLOAK_POLICY_ENFORCER: '{}'
  KEYCLOAK_REALM: expertflow
  KEYCLOAK_RESOURCE: cim
  KEYCLOAK_SCOPE_NAME: Any default scope
  KEYCLOAK_SSL_REQUIRED: external
  KEYCLOAK_USE_RESOURCE_ROLE_MAPPINGS: "true"
  KEYCLOAK_USERNAME_ADMIN: admin
  KEYCLOAK_VERIFY_TOKEN_AUDIENCE: "false"
  MONGODB_AUTHENTICATION_DATABASE: admin
  MONGODB_CA_CERT: /mongo/mongodb-ca-cert
  MONGODB_CERTIFICATE_PATH: https_things/cert.pem
  MONGODB_CLIENT_CERT: /mongo/client-pem
  MONGODB_ENABLE_SSL: "true"
  MONGODB_HOST: mongo-mongodb.ef-external.svc.cluster.local
  MONGODB_KEEP_ALIVE_TIME: "3000"
  MONGODB_PASSWORD: Expertflow123
  MONGODB_READ_PREFERENCE: secondaryPreferred
  MONGODB_RECONNECT_INTERVAL: "500"
  MONGODB_REPLICASET: expertflow
  MONGODB_REPLICASET_ENABLED: "false"
  MONGODB_USERNAME: root
  REDIS_CA_CERT: /redis/ca.crt
  REDIS_CLIENT_CERT: /redis/tls.crt
  REDIS_CLIENT_KEY: /redis/tls.key
  REDIS_CONNECT_TIMEOUT: "300"
  REDIS_HOST: redis-master.ef-external.svc
  REDIS_MAX_ACTIVE: "50"
  REDIS_MAX_IDLE: "50"
  REDIS_MAX_WAIT: "-1"
  REDIS_MIN_IDLE: "25"
  REDIS_PASSWORD: Expertflow123
  REDIS_PORT: "6379"
  REDIS_SENTINEL_ENABLE: "false"
  REDIS_SENTINEL_MASTER: expertflow
  REDIS_SENTINEL_NODES: redis-ha-node-0.redis-ha-headless.ef-external.svc.cluster.local:26379,redis-ha-node-1.redis-ha-headless.ef-external.svc.cluster.local:26379,redis-ha-node-2.redis-ha-headless.ef-external.svc.cluster.local:26379
  REDIS_SENTINEL_PASSWORD: Expertflow123
  REDIS_SSL_ENABLED: "true"
  REDIS_TIMEOUT: "5000"
  TRUST_STORE_PASSWORD: Expertflow123
  VAULT_CA_CERT: /tls-ca/tls.crt
  VAULT_CLIENT_CERT: /tls-server-client/tls.crt
  VAULT_CLIENT_KEY: /tls-server-client/tls.key
  VAULT_URI: https://vault.vault.svc.cluster.local:8200
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: ef-cx
    meta.helm.sh/release-namespace: expertflow
  labels:
    app.kubernetes.io/managed-by: Helm
  name: ef-cx-efconnections-cm
  namespace: transflux
Update the MongoDB environment variables in the above ConfigMap according to your environment.
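If you are unsure of the local MongoDB password, it can usually be read from the existing secret in the ef-external namespace. The secret name mongo-mongodb and the key mongodb-root-password below are assumptions based on a typical Bitnami MongoDB release named mongo, so adjust them to your setup.
kubectl get secret mongo-mongodb -n ef-external -o jsonpath='{.data.mongodb-root-password}' | base64 -d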
kubectl apply -f ef-cx-efconnections.yaml
Apply Image Pull Secret
Run the following commands to apply the ImagePullSecret for Transflux.
vi ef-imagePullSecret-transflux.yaml
apiVersion: v1
kind: Secret
metadata:
  name: expertflow-reg-cred
  namespace: transflux
data:
  .dockerconfigjson: ewogICAgICAgICJhdXRocyI6IHsKICAgICAgICAgICAgICAgICJnaXRpbWFnZXMuZXhwZXJ0Zmxvdy5jb20iOiB7CiAgICAgICAgICAgICAgICAgICAgICAgICJhdXRoIjogIlpXWmplRHBTWldOU2NITjFTRE0wZVhGd05UWlpVa1pWWWc9PSIKICAgICAgICAgICAgICAgIH0KICAgICAgICB9Cn0K
type: kubernetes.io/dockerconfigjson
kubectl apply -f ef-imagePullSecret-transflux.yaml
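Optionally, verify that the secret decodes to the expected registry credentials before referencing it in the Helm values.
kubectl -n transflux get secret expertflow-reg-cred -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d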
Transfer the Mongo, Redis, PostgreSQL and ActiveMQ Certificates from the ef-external namespace
kubectl get secret mongo-mongodb-ca -n ef-external -o yaml | sed 's/namespace: ef-external/namespace: transflux/' | kubectl create -f -
kubectl get secret ef-postgresql-crt -n ef-external -o yaml | sed 's/namespace: ef-external/namespace: transflux/' | kubectl create -f -
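A quick check that both copied secrets now exist in the transflux namespace:
kubectl -n transflux get secrets mongo-mongodb-ca ef-postgresql-crt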
Create a folder to save the Helm chart’s values
mkdir helm-values
Customise the deployment by creating a custom values file and adding the configuration required for your environment.
vi helm-values/cx-transflux-custom-values.yaml
Use the following command to see the default values.yaml
helm show values expertflow/transflux --version 4.10.0
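It can be convenient to save the default values to a file and keep it next to the custom values for reference:
helm show values expertflow/transflux --version 4.10.0 > helm-values/transflux-default-values.yaml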
Open the file helm-values/cx-transflux-custom-values.yaml and edit it according to the information below, which is required for CX Transflux to work properly.
The Airflow metadata database lives in the PostgreSQL instance that is already deployed with CX; if the airflow database does not yet exist there, create it as described later in this guide.
Make sure to add the values below to cx-transflux-custom-values.yaml, following the structure shown by the helm show values output above (a sketch follows the table below).
| Value | Updated Value |
|---|---|
| ingressRouter | Dedicated Fully Qualified Domain Name (FQDN) |
| tag | |
| MONGODB_PASSWORD | Update the local MongoDB password when using a non-default password |
| AIRFLOW__CORE__SQL_ALCHEMY_CONN | postgresql+psycopg2://sa:Expertflow123@ef-postgresql.ef-external.svc:5432/airflow |
| imagePullSecrets | expertflow-reg-cred |
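The snippet below is only a minimal sketch of how these entries might look in cx-transflux-custom-values.yaml. The exact key paths and nesting are assumptions, so always confirm them against the helm show values output before applying.
ingressRouter: transflux.example.com   # hypothetical FQDN; replace with your dedicated FQDN
tag: ""                                # left blank as in the table above; set the image tag for your release
imagePullSecrets:
  - name: expertflow-reg-cred          # list form is an assumption; check the default values file
MONGODB_PASSWORD: "Expertflow123"      # only when the local MongoDB password is non-default
AIRFLOW__CORE__SQL_ALCHEMY_CONN: "postgresql+psycopg2://sa:Expertflow123@ef-postgresql.ef-external.svc:5432/airflow"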
To configure CX-Transflux, update the configuration files with your environment-specific values. Open the config files in the /kubernetes/transflux/config directory and ensure the required information is set correctly.
In case of a MySQL target database:
target:
  type: "mysql"
  db_url: "mysql+pymysql://<your-db-username>:<password>@<host>:<port>/<mysql-db-name>"
configdb:
  type: "mysql"
  db_url: "mysql+pymysql://<your-db-username>:<password>@<host>:<port>/<mysql-db-name>"
Use the following to export the TLS certificates from the mongo-mongodb-ca secret. The certificates will be exported to /tmp/mongodb_certs.
mkdir /tmp/mongodb_certs
CERTFILES=($(kubectl get secret mongo-mongodb-ca -n ef-external -o go-template='{{range $k,$v := .data}}{{$k}}{{"\n"}}{{end}}'))
for f in ${CERTFILES[*]}; do kubectl get secret mongo-mongodb-ca -n ef-external -o go-template='{{range $k,$v := .data}}{{ if eq $k "'$f'"}}{{$v | base64decode}}{{end}}{{end}}' > /tmp/mongodb_certs/${f} 2>/dev/null; done
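The directory should now contain the MongoDB CA and client certificate files; a quick listing confirms the export worked:
ls -l /tmp/mongodb_certs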
Create directories for the TLS certificates and copy the exported MongoDB certificates into place:
mkdir -p certificates/mongo_certs
mkdir -p certificates/mysql_certs
cp -r /tmp/mongodb_certs/* certificates/mongo_certs/
For a non-SSL target database, the certificates/mysql_certs directory remains empty, but the secret must still be created with the command below.
Place the MySQL/MSSQL certificates in the certificates/mysql_certs directory and create a secret for them to enable TLS encryption. The certificates should include the following files:
ca.pem
client-cert.pem
client-key.pem
kubectl -n transflux create secret generic ef-transflux-mysql-certs-secret --from-file=certificates/mysql_certs
Create the configuration ConfigMap for the CX-Transflux pipelines.
kubectl -n transflux create configmap ef-transflux-config-cm --from-file=config
Create the 'airflow' database in PostgreSQL
Run the following commands to access the Postgres shell
export POSTGRES_PASSWORD=$(kubectl get secret --namespace ef-external ef-postgresql -o jsonpath="{.data.password}" | base64 -d)
kubectl run ef-postgresql-client --rm --tty -i --restart='Never' --namespace ef-external \
  --image docker.io/bitnami/postgresql:14.5.0-debian-11-r21 --env="PGPASSWORD=$POSTGRES_PASSWORD" \
  --command -- psql --host ef-postgresql -U sa -d licenseManager -p 5432
Now that you are in the Postgres shell, run the following to create the airflow database:
CREATE DATABASE airflow;
-- run \l to verify that the database was created
-- run exit to leave the pod
Finally, deploy CX-Transflux.
helm upgrade --install --namespace transflux --set global.efCxReleaseName="ef-cx" cx-transflux --debug --values helm-values/cx-transflux-custom-values.yaml expertflow/transflux --version 4.10.0
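Once the release is installed, the pods in the transflux namespace should come up; the following commands give a quick health check.
helm status cx-transflux -n transflux
kubectl -n transflux get pods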