
Data Migrations

Executive Summary

With the release of CX 4.7, incoming data will now adhere to the updated 4.7 schema, reflecting the schema modifications introduced in previous versions. However, legacy data stored in MongoDB remains in the format of earlier releases, necessitating a migration to align it with the updated schema.

To ensure compatibility between legacy data and the current release, data migration scripts have been developed. These scripts efficiently transform the data from the previous format to the 4.7 schema and are seamlessly integrated into the Experflow ETL Data Platform for automation and streamlined execution.

Migration Plan

We have developed data migration pipelines. Each pipeline serves a distinct purpose and must be executed in the order provided to ensure the migration completes successfully. The workflows are flexible, allowing pipelines to be run over different intervals as required.

Before performing the activity, make sure that the conversations and CustomerTopicEvents collections have the following indexes in place within MongoDB.

Indexes on conversations:

  1. customer._id

  2. endTime

  3. agentParticipants.agentParticipant_id

  4. agentParticipants.username

  5. conversationDirection

  6. creationTime

Indexes on CustomerTopicEvents:

  1. customerId

  2. cimEvent.name

  3. cimEvent.type

  4. cimEvent.channelSession.customer._id

  5. cimEvent.channelSession.customerSuggestions._id

  6. cimEvent.channelSession.roomInfo.mode

  7. timestamp

  8. topicId

  9. recordCreationTime
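
If any of these indexes are missing, they can be created up front. The following mongosh sketch creates single-field ascending indexes matching the lists above (assumptions: ascending, non-unique indexes are sufficient, and the script is run while connected to the relevant database; verify against your deployment before running):

```javascript
// Illustrative mongosh sketch: creates the indexes listed above.
// Assumption: single-field ascending (1) indexes; adjust options as needed.
const conversationIndexes = [
  { "customer._id": 1 },
  { "endTime": 1 },
  { "agentParticipants.agentParticipant_id": 1 },
  { "agentParticipants.username": 1 },
  { "conversationDirection": 1 },
  { "creationTime": 1 },
];

const customerTopicEventIndexes = [
  { "customerId": 1 },
  { "cimEvent.name": 1 },
  { "cimEvent.type": 1 },
  { "cimEvent.channelSession.customer._id": 1 },
  { "cimEvent.channelSession.customerSuggestions._id": 1 },
  { "cimEvent.channelSession.roomInfo.mode": 1 },
  { "timestamp": 1 },
  { "topicId": 1 },
  { "recordCreationTime": 1 },
];

// `db` is provided by the mongosh shell; the guard lets the file be
// syntax-checked outside mongosh without error.
if (typeof db !== "undefined") {
  conversationIndexes.forEach((spec) => db.conversations.createIndex(spec));
  customerTopicEventIndexes.forEach((spec) =>
    db.CustomerTopicEvents.createIndex(spec)
  );
}
```

createIndex is a no-op for an index that already exists, so the script is safe to re-run.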

Pre-requisites

  1. Mongo database with data available in the 4.4 format within conversation-manager_db, adminPanel, and routing-engine_db.

  2. TLS certificates (mongo-ca-cert, client-pem) for MongoDB, if required.

  3. The Expertflow ETL Data Platform should be deployed and accessible. Deployment Guide: Expertflow ETL Deployment

Scope of Migration

  • The source databases upon which this activity is to be performed are conversation-manager_db, adminPanel, and routing-engine_db.

  • Use this configuration file: data_migration_config.yaml.

  • Place this file in transflux/config.

  • Open the config file transflux/config/data_migration_config.yaml.

Configurations

The host should be the primary pod (the one serving write operations), i.e. mongo-mongodb-0.mongo-mongodb-headless.ef-external.svc.cluster.local

  • host, port, username, password: The MongoDB connection details and credentials.

  • Within bulk_repeat are the configurations for the conversations migration, which runs in batches.

    • mongo_db: conversation-manager_db database.

    • js_file_path: path where the migration scripts are placed.

    • start_date: Start date for the data migration.

    • end_date: End date for the data migration.

    • interval: Interval in minutes that splits the data to be processed into batches within the given timeline.

  • Within RE_adminPanel are the configurations for the routing-engine_db and adminPanel data migration.

  • Within conversation_dropIndex are the configurations for dropping indexes on the conversations collection.

  • tls: TLS flag that determines whether the Mongo database accepts only TLS-verified connections.

    • tls_ca_file: Path to the mongo-ca-cert file.

    • tls_cert_key_file: Path to the client-pem file.
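
Putting the settings above together, a minimal data_migration_config.yaml might look like the following. This is an illustrative sketch assembled from the keys described above; all values (paths, dates, port) are placeholder assumptions, and the exact structure should be taken from the data_migration_config.yaml file shipped with Transflux:

```yaml
# Illustrative sketch only -- layout and values assumed from the descriptions above.
host: mongo-mongodb-0.mongo-mongodb-headless.ef-external.svc.cluster.local
port: 27017
username: <mongodb-username>
password: <mongodb-password>

tls: true                             # false if TLS-verified connections are not enforced
tls_ca_file: <path-to-mongo-ca-cert>
tls_cert_key_file: <path-to-client-pem>

bulk_repeat:                          # conversations migration, run in batches
  mongo_db: conversation-manager_db
  js_file_path: <path-to-migration-scripts>
  start_date: "2024-01-01 00:00:00"
  end_date: "2024-06-30 23:59:59"
  interval: 60                        # batch window in minutes

RE_adminPanel: {}                     # routing-engine_db and adminPanel migration settings

conversation_dropIndex: {}            # conversations drop-index settings
```
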

Whenever changes are made to the configuration, it is essential to delete the existing ConfigMap ef-transflux-config-cm and recreate it using the following commands:
kubectl -n expertflow delete configmap ef-transflux-config-cm
kubectl -n expertflow create configmap ef-transflux-config-cm --from-file=config

To follow along with the demonstration of the data migration, see:

Migration Activity (CX 4.4.10 to CX 4.7)
