HC Migration Utility Deployment Guide
This document illustrates the procedure to deploy the HC Migration Utility on Kubernetes.
Before you begin, verify that:
- Kubernetes is installed. If not, see Deployment Planning.
- Storage is already set up. If not, see Storage Solution - Getting Started.

The migration utility requires a backup of MongoDB data.
Prepare HC Migration Utility for Deployment
Step 1: Backup & Restore MongoDB
Backing up the chatsolution database in HC

Exec into the MongoDB container on the HC host:

```bash
docker exec -it expertflow_mongo_1 sh
```
Create the backup directory in the / directory of the container:

```bash
mkdir /backup
```
Export the compressed backup to the /backup directory:

```bash
mongodump --db=chatsolution --archive=/backup/chatsolution.tar.gz --gzip
```
Verify that chatsolution.tar.gz exists in the /backup directory:

```bash
ls -la /backup/
```
Exit the container:

```bash
exit
```
Copy chatsolution.tar.gz from the container to the host VM:

```bash
docker cp expertflow_mongo_1:/backup/chatsolution.tar.gz chatsolution.tar.gz
```
Copy chatsolution.tar.gz to the CX host for restoring it.
Restoring the chatsolution database in CX
Export the MongoDB certificates:

```bash
mkdir /tmp/mongodb_certs
CERTFILES=($(kubectl get secret mongo-mongodb-ca -n ef-external -o go-template='{{range $k,$v := .data}}{{$k}}{{"\n"}}{{end}}'))
for f in ${CERTFILES[*]}; do kubectl get secret mongo-mongodb-ca -n ef-external -o go-template='{{range $k,$v := .data}}{{ if eq $k "'$f'"}}{{$v | base64decode}}{{end}}{{end}}' > /tmp/mongodb_certs/${f} 2>/dev/null; done
```
Run the MongoDB client pod:

```bash
kubectl run --namespace ef-external mongo-mongodb-client --image docker.io/bitnami/mongodb:6.0.2-debian-11-r1 --command -- sleep infinity
```
Copy the MongoDB certificates to the client pod:

```bash
kubectl -n ef-external cp /tmp/mongodb_certs mongo-mongodb-client:/tmp/
```
Copy the backup into the MongoDB client pod:

```bash
kubectl -n ef-external cp <backup-filename> mongo-mongodb-client:/tmp/
```
Exec into the MongoDB client pod:

```bash
kubectl exec -it -n ef-external mongo-mongodb-client -- bash
```
Connect to MongoDB:

```bash
mongosh admin --host "mongo-mongodb" --authenticationDatabase admin -u root -p Expertflow123 --tls --tlsAllowInvalidHostnames --tlsAllowInvalidCertificates --tlsCertificateKeyFile /tmp/mongodb_certs/client-pem --tlsCAFile /tmp/mongodb_certs/client-pem
```
Once connected to MongoDB, create the user and database:

```javascript
use chatsolution
db.createUser({
  user: "chatsolution",
  pwd: "chatsolution",
  roles: [
    { role: "read", db: "chatsolution" }
  ]
});
```
Disconnect from MongoDB, then run the following command in the MongoDB client pod to restore the database:

```bash
mongorestore --host "mongo-mongodb.ef-external.svc.cluster.local" --db chatsolution --gzip --archive=/tmp/<backup-filename> --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD --ssl --sslPEMKeyFile /tmp/mongodb_certs/client-pem --sslCAFile /tmp/mongodb_certs/mongodb-ca-cert
```
Once the restoration is complete, verify it:

```bash
mongosh admin --host "mongo-mongodb" --authenticationDatabase admin -u root -p Expertflow123 --tls --tlsAllowInvalidHostnames --tlsAllowInvalidCertificates --tlsCertificateKeyFile /tmp/mongodb_certs/client-pem --tlsCAFile /tmp/mongodb_certs/client-pem --eval 'show dbs'
```
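Beyond show dbs, a quick sanity check is to compare per-collection document counts against the source database. A sketch, run inside the client pod with the same connection flags used above:

```shell
# Print each collection in the restored chatsolution database with its
# document count, for comparison against the HC source database.
mongosh chatsolution --host "mongo-mongodb" --authenticationDatabase admin \
  -u root -p Expertflow123 --tls --tlsAllowInvalidHostnames \
  --tlsAllowInvalidCertificates \
  --tlsCertificateKeyFile /tmp/mongodb_certs/client-pem \
  --tlsCAFile /tmp/mongodb_certs/client-pem \
  --eval 'db.getCollectionNames().forEach(c => print(c, db[c].countDocuments()))'
```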
Step 2: Migrate MinIO Data
Follow these steps to migrate data from the Docker-based MinIO container to the Kubernetes container.
Exec into the MinIO Docker container and navigate to the default data location inside the container:

```bash
docker exec -it <minio_container_name> sh
cd data/
```
Zip the bucket you want to migrate:

```bash
zip -r <zip_directory_name> <bucket_to_zip>
```
Exit the MinIO container and copy the zipped file from the container to the host VM:

```bash
exit
docker cp <minio_container_name>:<zipped_file_path> <host_vm_path>
```
Copy the zipped file to the destination VM:

```bash
scp <host_vm_zip_file_path> <user>@<dest_vm_ip>:<dest_vm_path>
```
Copy the zipped file into the pod:

```bash
kubectl cp <path_to_file> <namespace>/<pod_name>:<inside_pod_path>
```
Unzip the file:

```bash
unzip <zipped_file> -d <path_to_unzip>
```
Restart the pod by deleting it, so that its controller recreates it:

```bash
kubectl delete pod <pod_name> -n <namespace>
```
Step 3: Clone the repository

```bash
git clone https://efcx:RecRpsuH34yqp56YRFUb@gitlab.expertflow.com/cim/hc-migration.git
cd hc-migration/kubernetes/
```
Step 4: Update FQDN
The HC Migration Utility should be accessible by a fully qualified domain name. Assign the FQDN:

```bash
sed -i 's/devops[0-9]*.ef.com/<FQDN>/g' ConfigMaps/* Ingresses/nginx/*
```
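To confirm the substitution took effect, grep for the FQDN you assigned (a quick sanity check; replace <FQDN> with the same domain used in the sed command):

```shell
# List every ConfigMap and Ingress line that now carries the assigned FQDN.
grep -R "<FQDN>" ConfigMaps/ Ingresses/nginx/
```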
Step 5: Deploy HC Migration Utility
For Kubernetes manifests:

Apply the ConfigMaps:

```bash
kubectl apply -f ConfigMaps/
```
Create the services for the HC Migration Utility:

```bash
kubectl apply -f Services/
```
Apply the Deployment manifest:

```bash
kubectl apply -f Deployments/
```
Before proceeding to the next steps, wait for the HC Migration Utility pod to be running:

```bash
kubectl get pods -n expertflow
```
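Instead of polling kubectl get pods, kubectl can block until the pod is Ready. The label selector below is an assumption, not confirmed by this guide; check the actual labels with kubectl get pods --show-labels:

```shell
# Wait up to 5 minutes for the migration-utility pod to become Ready.
# -l app=migration-utility is an assumed label; adjust to your Deployment.
kubectl wait --namespace expertflow --for=condition=ready pod \
  -l app=migration-utility --timeout=300s
```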
For RKE2-based Ingresses using the Ingress-Nginx Controller:

```bash
kubectl apply -f Ingresses/nginx/
```
For Helm-based deployment:

Update the ingressRouter value in the helm/values.yaml file with the FQDN of the host. For example:

```yaml
global:
  ingressRouter: "devops212.ef.com"
  ingressCertName: "ef-ingress-tls-secret"
  ingressClassName: "nginx"
  commonIngressAnnotations: {}
  efCxReleaseName: "ef-cx"
```
Run the following command to deploy the migration utility Helm chart:

```bash
helm upgrade --install --namespace=expertflow --set global.efCxReleaseName="ef-cx" migration-utility --debug --values=helm/values.yaml helm/
```
Step 6: Verify Deployment
Verify the deployment at https://{FQDN}/migration-utility.
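From a shell, the endpoint can also be probed with curl (a sketch; -k skips TLS verification, which matters if the ingress uses a self-signed certificate):

```shell
# Print the HTTP status code returned by the migration utility endpoint;
# a 200 (or a redirect code) indicates the ingress and pod are serving.
curl -k -s -o /dev/null -w "%{http_code}\n" "https://<FQDN>/migration-utility"
```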