HC Migration Utility Deployment Guide

This document describes the steps to deploy the HC Migration Utility on Kubernetes.

Before you begin

The migration utility requires a backup of the MongoDB data from the HC deployment.
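A minimal pre-flight check, assuming the MongoDB container on the HC host is named expertflow_mongo_1 (as in the steps below) and the CX host has kubectl access to the cluster:

# On the HC host: the MongoDB container should be listed as Up
docker ps --filter "name=expertflow_mongo_1"

# On the CX host: the MongoDB pod in the ef-external namespace should be Running
kubectl get pods -n ef-external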

Prepare HC Migration Utility for Deployment

Step 1: Backup & Restore MongoDB

Backing up the chatsolution database in HC

  1. Exec into the MongoDB container on the HC host

    Bash
    docker exec -it expertflow_mongo_1 sh
    
  2. Create the backup directory at the root (/) of the container

    Bash
    mkdir /backup
    
  3. Export the compressed backup to the /backup directory

    Bash
    mongodump --db=chatsolution --archive=/backup/chatsolution.tar.gz --gzip
    
  4. Verify that chatsolution.tar.gz exists in the /backup directory

    Bash
    ls -la /backup/
    
  5. Exit out of the container

    Bash
    exit
    
  6. Copy chatsolution.tar.gz from the container to the host VM

    Bash
    docker cp expertflow_mongo_1:/backup/chatsolution.tar.gz chatsolution.tar.gz
    
  7. Copy chatsolution.tar.gz to the CX host for restoration.
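    For example, using scp (the user, CX host address, and destination path below are placeholders):

    Bash
    scp chatsolution.tar.gz <user>@<cx_host_ip>:/tmp/
    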

Restoring chatsolution database in CX

Export the MongoDB certificates:

mkdir /tmp/mongodb_certs
CERTFILES=($(kubectl get secret mongo-mongodb-ca -n ef-external -o go-template='{{range $k,$v := .data}}{{$k}}{{"\n"}}{{end}}'))
for f in ${CERTFILES[*]}; do   kubectl get secret mongo-mongodb-ca  -n ef-external -o go-template='{{range $k,$v := .data}}{{ if eq $k "'$f'"}}{{$v  | base64decode}}{{end}}{{end}}' > /tmp/mongodb_certs/${f} 2>/dev/null; done
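
Optionally, verify that the certificate files were extracted:

ls -la /tmp/mongodb_certs/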

Run the MongoDB client pod using the following command:

kubectl run --namespace ef-external    mongo-mongodb-client  --image docker.io/bitnami/mongodb:6.0.2-debian-11-r1 --command -- sleep infinity

Copy the MongoDB certificates to the client pod:

kubectl -n ef-external cp /tmp/mongodb_certs mongo-mongodb-client:/tmp/

Run the following command to copy the backup into the MongoDB client pod:

kubectl -n ef-external cp <backup-filename> mongo-mongodb-client:/tmp/
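
For example, with the backup taken in Step 1:

kubectl -n ef-external cp chatsolution.tar.gz mongo-mongodb-client:/tmp/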

Exec into the MongoDB client pod:

kubectl exec -it -n ef-external mongo-mongodb-client -- bash

Connect to MongoDB using the following command:

mongosh admin --host "mongo-mongodb" --authenticationDatabase admin -u root -p Expertflow123 --tls  --tlsAllowInvalidHostnames  --tlsAllowInvalidCertificates --tlsCertificateKeyFile /tmp/mongodb_certs/client-pem  --tlsCAFile /tmp/mongodb_certs/client-pem

Once connected to MongoDB, create the user and database by running the following commands:

use chatsolution
db.createUser({
  user: "chatsolution",  
  pwd: "chatsolution",  
  roles: [
    { role: "read", db: "chatsolution" }
  ]
});
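
Optionally, confirm the user exists before disconnecting:

db.getUsers()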

Now disconnect from MongoDB and run the following command in the MongoDB client pod to restore the database:

mongorestore --host "mongo-mongodb.ef-external.svc.cluster.local" --db chatsolution --gzip --archive=/tmp/<backup-filename> --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD --ssl --sslPEMKeyFile /tmp/mongodb_certs/client-pem   --sslCAFile /tmp/mongodb_certs/mongodb-ca-cert

Once the restoration is complete, verify it by running the following command:

mongosh admin --host "mongo-mongodb" --authenticationDatabase admin -u root -p Expertflow123 --tls  --tlsAllowInvalidHostnames  --tlsAllowInvalidCertificates --tlsCertificateKeyFile /tmp/mongodb_certs/client-pem  --tlsCAFile /tmp/mongodb_certs/client-pem --eval 'show dbs'
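
The chatsolution database should appear in the output. To also list the restored collections, the same connection options can be reused with a different eval expression, for example:

mongosh chatsolution --host "mongo-mongodb" --authenticationDatabase admin -u root -p Expertflow123 --tls --tlsAllowInvalidHostnames --tlsAllowInvalidCertificates --tlsCertificateKeyFile /tmp/mongodb_certs/client-pem --tlsCAFile /tmp/mongodb_certs/client-pem --eval 'db.getCollectionNames()'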


Step 2: Migrate MinIO Data

Follow these steps to migrate data from the Docker-based MinIO container to the Kubernetes pod. A worked example with hypothetical names follows the list.

  1. Exec into the MinIO Docker container and navigate to the default data location inside the container

    Bash
    docker exec -it <minio_container_name> sh
    cd data/
    
  2. Zip the bucket you want to migrate

    Bash
    zip -r <zip_directory_name> <bucket_to_zip>
    
  3. Exit the MinIO container and copy the zipped file from the container to the host VM

    Bash
    exit
    docker cp <minio_container_name>:<zipped_file_path> <host_vm_path>
    
  4. Copy the zipped file to the destination VM

    Bash
    scp <host_vm_zip_file_path> <user>@<dest_vm_ip>:<dest_vm_path>
    
  5. Copy the zipped file to the target location inside the pod

    Bash
    kubectl cp <path_to_file> <namespace>/<pod_name>:<inside_pod_path>
    
  6. Unzip the file with the following command

    Bash
    unzip <zipped_file> -d <path_to_unzip>
    
  7. Restart the pod with the following command

    Bash
    kubectl delete pod <pod_name> -n <namespace>
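
As an illustration, a hypothetical end-to-end run for a bucket named chat-media, a MinIO container named expertflow_minio_1, and a MinIO pod minio-0 in the ef-external namespace could look like the sketch below. The container, pod, namespace, and in-pod data path are examples only (and the pod image must provide unzip), so substitute the values from your own deployment.

# On the HC host (example names only)
docker exec -it expertflow_minio_1 sh
cd data/
zip -r chat-media.zip chat-media
exit
docker cp expertflow_minio_1:/data/chat-media.zip chat-media.zip
scp chat-media.zip <user>@<dest_vm_ip>:/tmp/

# On the CX host (example pod, namespace, and data path)
kubectl cp /tmp/chat-media.zip ef-external/minio-0:/bitnami/minio/data/
kubectl exec -it -n ef-external minio-0 -- unzip /bitnami/minio/data/chat-media.zip -d /bitnami/minio/data/
kubectl delete pod minio-0 -n ef-external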
    

Step 3: Clone the repository

git clone https://efcx:RecRpsuH34yqp56YRFUb@gitlab.expertflow.com/cim/hc-migration.git
cd hc-migration/kubernetes/

Step 4: Update FQDN

The HC Migration Utility should be accessible via a fully qualified domain name (FQDN). Assign the FQDN:

sed -i 's/devops[0-9]*.ef.com/<FQDN>/g' ConfigMaps/*  Ingresses/nginx/*
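
To confirm the substitution, a quick grep for the new FQDN can be run (replace <FQDN> with the value you used above):

grep -r "<FQDN>" ConfigMaps/ Ingresses/nginx/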

Step 5: Deploy HC Migration Utility

For Kubernetes manifests:

  1. Apply ConfigMap

    kubectl apply -f ConfigMaps/
    
  2. Create services for HC Migration Utility

    kubectl apply -f Services/
    
  3. Apply the Deployment manifest 

    kubectl apply -f Deployments/
    
  4. Before proceeding to the next step, wait for the HC Migration Utility pod to be running.

    kubectl get pods -n expertflow
    
  5. Apply the Ingress manifests (for RKE2-based deployments using the Ingress-NGINX Controller)

    kubectl apply -f Ingresses/nginx/
    
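After applying the manifests, the Service and Ingress objects can be checked with standard kubectl commands:

kubectl get svc,ingress -n expertflow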

For Helm-based deployment:

Update the ingressRouter value in the helm/values.yaml file with the FQDN of the host.
For example:

global:
  ingressRouter: "devops212.ef.com"
  ingressCertName: "ef-ingress-tls-secret"
  ingressClassName: "nginx"
  commonIngressAnnotations: {}
  efCxReleaseName: "ef-cx"

Run the following command to deploy the migration utility Helm chart:

helm upgrade --install --namespace=expertflow --set global.efCxReleaseName="ef-cx" migration-utility --debug --values=helm/values.yaml helm/
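
Once the command completes, the release and pod status can be checked with standard Helm and kubectl commands:

helm list -n expertflow
kubectl get pods -n expertflow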

Step 6: Verify Deployment

Verify the deployment by browsing to https://<FQDN>/migration-utility.
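
Alternatively, a quick reachability check can be run from the command line (replace <FQDN>; the -k flag skips certificate validation in case a self-signed certificate is used):

curl -k https://<FQDN>/migration-utility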