
HC Migration Utility Deployment Guide

This document describes the procedure to deploy the HC Migration Utility on Kubernetes.

Before you begin

Verify that a backup of the MongoDB data from the HC deployment is available; the migration utility requires it.

Prepare HC Migration Utility for Deployment

Step 1: Backup & Restore MongoDB

Backing up the chatsolution database in HC

  1. Exec into the MongoDB container on the HC host

    BASH
    docker exec -it expertflow_mongo_1 sh
  2. Create the /backup directory inside the container

    BASH
    mkdir /backup
  3. Export a compressed backup to the /backup directory

    BASH
    mongodump --db=chatsolution --archive=/backup/chatsolution.tar.gz --gzip
  4. Verify that chatsolution.tar.gz exists in the /backup directory

    BASH
    ls -la /backup/
  5. Exit out of the container

    BASH
    exit
  6. Copy chatsolution.tar.gz from the container to the host VM

    BASH
    docker cp expertflow_mongo_1:/backup/chatsolution.tar.gz chatsolution.tar.gz
  7. Copy chatsolution.tar.gz to the CX host for restoring it.
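The final copy in step 7 can be done over SSH; a minimal sketch, assuming SSH access to the CX host, with hypothetical user, host, and destination path values:

```shell
# Hypothetical values; replace with your CX host's user, address, and target path.
CX_USER="ubuntu"
CX_HOST="cx.example.com"
# Copy the backup archive from the HC host to the CX host.
scp chatsolution.tar.gz "${CX_USER}@${CX_HOST}:/tmp/chatsolution.tar.gz"
```

This requires key-based or password SSH access from the HC host to the CX host; no test is included since the command depends on live infrastructure.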

Restoring chatsolution database in CX

Export the MongoDB certificates:

CODE
mkdir /tmp/mongodb_certs
CERTFILES=($(kubectl get secret mongo-mongodb-ca -n ef-external -o go-template='{{range $k,$v := .data}}{{$k}}{{"\n"}}{{end}}'))
for f in ${CERTFILES[*]}; do   kubectl get secret mongo-mongodb-ca  -n ef-external -o go-template='{{range $k,$v := .data}}{{ if eq $k "'$f'"}}{{$v  | base64decode}}{{end}}{{end}}' > /tmp/mongodb_certs/${f} 2>/dev/null; done

Run the MongoDB client pod using the following command:

CODE
kubectl run --namespace ef-external mongo-mongodb-client --image docker.io/bitnami/mongodb:6.0.2-debian-11-r1 --command -- sleep infinity

Copy the MongoDB certificates to the client pod:

CODE
kubectl -n ef-external cp /tmp/mongodb_certs mongo-mongodb-client:/tmp/

Run the following command to copy the backup into the MongoDB client pod:

CODE
kubectl -n ef-external cp <backup-filename> mongo-mongodb-client:/tmp/

Exec into the MongoDB client pod:

CODE
kubectl exec -it -n ef-external mongo-mongodb-client -- bash

Connect to MongoDB using the following command:

CODE
mongosh admin --host "mongo-mongodb" --authenticationDatabase admin -u root -p Expertflow123 --tls  --tlsAllowInvalidHostnames  --tlsAllowInvalidCertificates --tlsCertificateKeyFile /tmp/mongodb_certs/client-pem  --tlsCAFile /tmp/mongodb_certs/client-pem

Once connected to MongoDB, create the user and database by running the following commands:

CODE
use chatsolution
db.createUser({
  user: "chatsolution",  
  pwd: "chatsolution",  
  roles: [
    { role: "read", db: "chatsolution" }
  ]
});

Now disconnect from MongoDB and run the following command in the MongoDB client pod to restore the database:

CODE
mongorestore --host "mongo-mongodb.ef-external.svc.cluster.local" --db chatsolution --gzip --archive=/tmp/<backup-filename> --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD --ssl --sslPEMKeyFile /tmp/mongodb_certs/client-pem   --sslCAFile /tmp/mongodb_certs/mongodb-ca-cert

Once the restoration is complete, verify it by running the following command:

CODE
mongosh admin --host "mongo-mongodb" --authenticationDatabase admin -u root -p Expertflow123 --tls  --tlsAllowInvalidHostnames  --tlsAllowInvalidCertificates --tlsCertificateKeyFile /tmp/mongodb_certs/client-pem  --tlsCAFile /tmp/mongodb_certs/client-pem --eval 'show dbs'

Step 2: Migrate MinIO Data

Follow these steps to migrate data from the Docker-based MinIO container to the Kubernetes container.

  1. Exec into the MinIO Docker container and navigate to the default data location inside the container

CODE
docker exec -it <minio_container_name> sh
cd data/
  2. Zip the bucket you want to migrate

CODE
zip -r <zip_directory_name> <bucket_to_zip>
  3. Exit the MinIO container and copy the zipped file from the container to the host VM

CODE
exit
docker cp <minio_container_name>:<zipped_file_path> <host_vm_path>
  4. Copy the zipped file to the destination VM

CODE
scp <host_vm_zip_file_path> <user>@<dest_vm_ip>:<dest_vm_path>
  5. Copy the zipped file into the MinIO pod

CODE
kubectl cp <path_to_file> <namespace>/<pod_name>:<inside_pod_path>
  6. Unzip the file with the following command

CODE
unzip <zipped_file> -d <path_to_unzip>
  7. Restart the pod with the following command

    CODE
    kubectl delete pod <pod_name> -n <namespace>

Step 3: Clone the repository

CODE
git clone https://efcx:RecRpsuH34yqp56YRFUb@gitlab.expertflow.com/cim/hc-migration.git
cd hc-migration/kubernetes/

Step 4: Update FQDN

HC Migration Utility should be accessible by a fully qualified domain name. Assign the FQDN.

CODE
sed -i 's/devops[0-9]*.ef.com/<FQDN>/g' ConfigMaps/*  Ingresses/nginx/*

Step 5: Deploy HC Migration Utility

For Kubernetes manifests:

  1. Apply ConfigMap

    CODE
    kubectl apply -f ConfigMaps/
  2. Create services for HC Migration Utility

    CODE
    kubectl apply -f Services/
  3. Apply the Deployment manifest 

    CODE
    kubectl apply -f Deployments/
  4. Before proceeding to the next steps, wait for HC Migration Utility pod to be running.

    CODE
    kubectl get pods -n expertflow
  5. For RKE2-based Ingresses using Ingress-Nginx Controller

    CODE
    kubectl apply -f Ingresses/nginx/
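The wait in step 4 above can optionally be scripted with kubectl wait; a sketch, assuming the pod carries a hypothetical app=migration-utility label (confirm the actual label first with kubectl get pods -n expertflow --show-labels):

```shell
# Block until the pod reports Ready, or fail after 5 minutes.
# The app=migration-utility selector is an assumption; verify it against
# the actual labels on your deployment before relying on this.
kubectl wait --namespace expertflow \
  --for=condition=Ready pod \
  --selector app=migration-utility \
  --timeout=300s
```

No test is included since the command requires a live cluster with the utility deployed.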

For Helm-based deployment:

Update the ingressRouter value in the helm/values.yaml file with the FQDN of the host. For example:

CODE
global:
  ingressRouter: "devops212.ef.com"
  ingressCertName: "ef-ingress-tls-secret"
  ingressClassName: "nginx"
  commonIngressAnnotations: {}
  efCxReleaseName: "ef-cx"

Run the following command to deploy the migration utility Helm chart:

CODE
helm upgrade --install --namespace=expertflow --set global.efCxReleaseName="ef-cx" migration-utility --debug --values=helm/values.yaml helm/

Step 6: Verify Deployment

Verify the deployment at https://{FQDN}/migration-utility
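The same check can be made from a shell with curl; a minimal sketch, where -k skips TLS verification for self-signed certificates and the FQDN value is only an example taken from the values.yaml snippet above:

```shell
# Example FQDN; substitute your own.
FQDN="devops212.ef.com"
# Print only the HTTP status code; a 200 indicates the utility is reachable.
curl -sk -o /dev/null -w '%{http_code}\n' "https://${FQDN}/migration-utility"
```

No test is included since the request depends on a reachable deployment.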
