
HC Backup and Restore from Legacy to New Architecture

Backup 

To create a backup of the databases, please follow these steps.

Preparation

Create a folder for database backups.

CODE
$ mkdir /root/db_backup    



MySQL




Log in to the MySQL service container and dump all the databases (for release 3.11.0 and above).

CODE
$ efutils login

# Please select expertflow_mysql_1 

$ mysqldump -u root --password='root' --all-databases >/root/db_backup/ef-mysql-dump.sql

# exit the mysql container using exit command

$ exit
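
Optionally, you can sanity-check the dump before moving on. This is a hedged example, assuming /root/db_backup is visible from where you run it; a completed mysqldump file normally ends with a '-- Dump completed' line.

CODE
$ ls -lh /root/db_backup/ef-mysql-dump.sql

$ tail -n 1 /root/db_backup/ef-mysql-dump.sql

# a completed dump should end with a line starting with "-- Dump completed"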





For all releases prior to 3.11.x, please take the MySQL backup manually using the steps below.


CODE
# shut down all the service containers to avoid data corruption.

# change to the deployment path

$ cd /root/HC3.9.4/docker/data/

# archive the mysql data folder

$ tar cvf /root/db_backup/mysql.tar mysql
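
Before moving on, you can optionally list the archive contents to confirm it was written correctly (a quick check, not part of the original procedure).

CODE
$ tar tvf /root/db_backup/mysql.tar | head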


The MySQL backup for HC 3.9.4 is not compatible with HC 3.14.0 and onwards, so you will have to re-sync UMM with Finesse and create Business Calendars.



Mongo 


This procedure is valid for Hybrid-Chat 3.10.0 and above. For earlier releases, please follow the manual procedure for taking and restoring a backup.

On a system with a large dataset, memory and CPU caps may need to be removed for 'mongodump' to work properly. This can be done by adding '#' in front of all lines in docker-compose-mongo.yml that specify CPU and memory caps.
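
As a minimal sketch, the caps can be commented out with sed, assuming they are declared with keys such as 'cpus:' and 'mem_limit:' (check the actual key names used in your docker-compose-mongo.yml before running this).

CODE
$ cp docker-compose-mongo.yml docker-compose-mongo.yml.bak

$ sed -i -E 's/^([[:space:]]*)(cpus:|mem_limit:)/\1# \2/' docker-compose-mongo.yml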


Log in to the Mongo service container and perform these steps.

From 3.11.0 to 3.14.0, and 3.15.x in HA

CODE
$ efutils login

# Please select expertflow_mongo_1

$ mongodump --gzip --out /data/mongodb/mongo-backup


IMPORTANT: This may take several minutes to complete, depending on the amount of data in the database.

WARNING: Please do not interrupt this operation, as it may corrupt the whole database. 

# exit the mongo service container

$ exit


For 3.15.x in Singleton only and 3.16.0 and above (both HA and Singleton)

CODE
$ efutils login

# Please select expertflow_mongo_1

$ mongodump --gzip --out /bitnami/mongodb/mongo-backup


IMPORTANT: This may take several minutes to complete, depending on the amount of data in the database.

WARNING: Please do not interrupt this operation, as it may corrupt the whole database. 

# exit the mongo service container

$ exit

Copy the resulting mongo backup folder 

CODE
$ cp -rp /var/lib/expertflow/docker/data/mongo/data/mongo-backup /root/db_backup
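
Optionally, confirm that the copy completed; mongodump creates one sub-directory per database, so the listing should show your databases (names depend on the deployment).

CODE
$ du -sh /root/db_backup/mongo-backup

$ ls /root/db_backup/mongo-backup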


For 3.10.0 and previous releases, please follow these steps to take a Mongo backup.


CODE
# shut down all the service containers to avoid data corruption.

# change to the deployment path

$ cd /root/HC/docker/data/

# archive the mongo data folder

$ tar cvf /root/db_backup/mongo.tar mongo




Restore

Copy the resulting database dumps to the central xdata storage volume.

CODE
$ \cp -rp /root/db_backup $(docker volume  inspect --format "{{.Mountpoint}}/" expertflow_xdata)
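
If you want to confirm where the expertflow_xdata volume is mounted on the host, you can print its mount point first; the path shown below is only an example and may differ on your system.

CODE
$ docker volume inspect --format "{{.Mountpoint}}" expertflow_xdata

# example output (actual path may differ):
# /var/lib/docker/volumes/expertflow_xdata/_data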


MySQL



Log in to the MySQL service container.

CODE
$ efutils login 

# and then select expertflow_mysql_1


Once inside the MySQL service container, please follow these steps to load the backup.


CODE
$ mysql --user=root --password='root'  < /xdata/db_backup/ef-mysql-dump.sql


If there is a version change between the old and new MySQL, please execute 'mysql_upgrade -u root -proot' to ensure that the imported database dump is upgraded to the new version of MySQL.
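
A minimal sketch of running that upgrade step from inside the MySQL service container, using the same credentials as elsewhere in this guide:

CODE
$ efutils login

# Please select expertflow_mysql_1

$ mysql_upgrade -u root -proot

$ exit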


For a mysql.tar-based archive backup, please use this procedure.


CODE
# Change to the deployment path

$ cd $(docker volume inspect --format "{{.Mountpoint}}/" expertflow_xdata)

# preserve the original mysql folder

$ mv mysql /tmp/mysql

# Restore the backup archive

$ tar xvf /root/db_backup/mysql.tar -C $(docker volume inspect --format "{{.Mountpoint}}/" expertflow_xdata)







Mongo



Prepare 


Expand the archive into a folder.


CODE
$ unzip -o -d mongo-backup db_backup/mongo-backup.zip



Copy the expanded backup archive folder to the DEPLOYMENT_PATH


CODE
$ cp -rpv  mongo-backup/var/lib/expertflow/docker/data/mongo/data/backup /var/lib/expertflow/docker/data/mongo/data/


The source path in the above command may vary depending upon the deployment path. Please confirm it carefully by exploring the expanded mongo-backup folder and traversing down to the backup folder.
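
To help confirm the path, you can search the expanded archive for the backup folder before copying (an optional helper):

CODE
$ find mongo-backup -type d -name backup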







Log in to the Mongo service container.

CODE
$ efutils login

# and select expertflow_mongo_1



Once logged in, please load the data using

CODE
$ mongorestore --verbose --drop /data/mongodb/backup
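
If only a single database needs to be restored rather than the whole dump, mongorestore supports namespace filtering; a minimal sketch, assuming the database name chatsolution (seen in the verification section below):

CODE
$ mongorestore --verbose --drop --nsInclude 'chatsolution.*' /data/mongodb/backup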


On a system with a large dataset, memory and CPU caps may need to be removed for 'mongorestore' to work properly. This can be done by adding '#' in front of all lines in docker-compose-mongo.yml that specify CPU and memory caps.




For a mongo.tar-based restore, please follow this procedure.


CODE
# Change to the deployment path

$ cd /var/lib/expertflow/docker/data

# preserve the original mongo folder

$ mv mongo{,-Orig}

# Restore the backup archive

$ tar xvf /root/db_backup/mongo.tar



Special Scenario 



A Hybrid-Chat upgrade from 3.9.4, 3.10.2, or 3.13.x to any newer release involves an additional step to cover the database compatibility matrix. All releases prior to 3.14.x use Mongo version 4.0, whereas newer releases of Hybrid-Chat include the 4.4 version of the Bitnami-based Mongo docker image. Versions 4.0 and 4.4 are not directly compatible with each other, so a 4.2 Mongo image is needed for data transformation.



Copy the DEPLOYMENT_PATH/docker/data/mongo folder from the source release to a new location on a system with internet access enabled.



CODE
$ mkdir -p  /root/mongo4.2/data/db
$ cp -rpv  /root/HC3.9.4/docker/data/mongo/data/* /root/mongo4.2/data/db/
$ chown -R 1001:1001 /root/mongo4.2


Now run a new container with MongoDB 4.2 to upgrade the dataset.


CODE
$ docker run   -d --name mongo4.2  -v /root/mongo4.2:/bitnami/mongodb  bitnami/mongodb:4.2
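
Before exec'ing into it, you can optionally confirm that the container started cleanly:

CODE
$ docker ps --filter name=mongo4.2

$ docker logs --tail 20 mongo4.2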


Then run the command sequence below to upgrade the dataset for 4.2 compatibility.


CODE
$ docker exec -it -u 0 mongo4.2 bash
$ mongo
>  db.adminCommand( { setFeatureCompatibilityVersion: "4.2" } )
>  db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
> exit
# exit

NOTE: the ">" prompt indicates commands run inside the mongo shell.



The `db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )` command should return '4.2' as the compatibility version.
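
The output of that command in the mongo shell should resemble the following:

CODE
> db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
{ "featureCompatibilityVersion" : { "version" : "4.2" }, "ok" : 1 }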


Take a dump of the dataset using


CODE
$ docker exec -it -u 0 mongo4.2 bash
$ mongodump --out /bitnami/mongodb/backup4.2


Copy the resulting backup from /root/mongo4.2/backup4.2 to a safe location on the system.

CODE
$ cp -rpv /root/mongo4.2/backup4.2 /root/mongo_upgraded_4.2

Remove the container using


CODE
$ docker rm -f  mongo4.2



Once a newer Hybrid-Chat release that uses Mongo 4.4 is deployed (for example 3.14.x or 3.15.x), please deploy the solution completely and then copy the resulting 4.2 backup to the new deployment path.

CODE
$ cp -rpv /root/mongo_upgraded_4.2 /var/lib/expertflow/docker/data/mongo/

Once the solution is completely up and running, please load the data from the 4.2 dump using


CODE
$ efutils login

# and select expertflow_mongo_1

$ mongorestore --verbose --drop /bitnami/mongodb/mongo_upgraded_4.2 



Once the newer version of Hybrid-Chat is deployed successfully, execute the commands below inside the Mongo service container.


CODE
$ efutils login

# and select expertflow_mongo_1

$ mongo
>  db.adminCommand( { setFeatureCompatibilityVersion: "4.4" } )
>  db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
> exit
$ exit


Your Mongo dataset is now upgraded to Mongo 4.4.

Once the data is imported, log in to the Mongo container and verify it, as described in the Verification section below.



Verification


MySQL


Log in to the MySQL service container and check the number of db users imported from the old backup archives.

CODE
$ efutils login
# select expertflow_mysql_1 


Verify the list of agents in the db_user table of the umm database.


CODE
# mysql -u root -proot
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 10284
Server version: 10.4.13-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [umm]> select count(*) from umm.db_user;
+----------+
| count(*) |
+----------+
|      426 |
+----------+
1 row in set (0.001 sec)

MariaDB [umm]>




Mongo



Log in to the Mongo service container


CODE
$ efutils login

# select the expertflow_mongo_1 


and get a list of all queues defined and imported from the old backup archives.


CODE
/ # mongo
MongoDB shell version v4.0.5
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("5412cd30-6cf4-45fd-bca1-8803763ca3b0") }
MongoDB server version: 4.0.5
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
        http://docs.mongodb.org/
Questions? Try the support group
        http://groups.google.com/group/mongodb-user
Server has startup warnings:
2021-03-22T23:13:18.782+0000 I STORAGE  [initandlisten]
2021-03-22T23:13:18.782+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2021-03-22T23:13:18.782+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2021-03-22T23:13:20.872+0000 I CONTROL  [initandlisten]
2021-03-22T23:13:20.872+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2021-03-22T23:13:20.872+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2021-03-22T23:13:20.872+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2021-03-22T23:13:20.872+0000 I CONTROL  [initandlisten]
2021-03-22T23:13:20.872+0000 I CONTROL  [initandlisten]
2021-03-22T23:13:20.872+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2021-03-22T23:13:20.872+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2021-03-22T23:13:20.872+0000 I CONTROL  [initandlisten]
2021-03-22T23:13:20.872+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2021-03-22T23:13:20.872+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2021-03-22T23:13:20.873+0000 I CONTROL  [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---

> show databases;
admin         0.000GB
chatsolution  0.000GB
config        0.000GB
local         0.000GB
mre           0.000GB
> use chatsolution
switched to db chatsolution
> show collections;
agents
conversations
messages
queues
tasks
> db.queues.find({})
{ "_id" : ObjectId("60543f3660e9f2000817ecb8"), "EnqueuedTasks" : [ ], "Name" : "DefaultPrecisionQueue", "__v" : 0 }
>
> exit
bye
/ # exit


