Deployment Guide

Solution Prerequisites

The following are the solution setup prerequisites.

Hardware requirements

For HA deployment, each machine in the cluster should have the following hardware specifications.


Item      Minimum requirement

CPU       2 vCPU cores (4 cores for non-HA deployment)
RAM       4 GB (8 GB for non-HA deployment)
Disk      100 GB mounted on /
NICs      1 NIC

Software Requirements

OS Compatibility

We require the customer/partner to install the following software on the server.

Item      Version

CentOS    7

Administrative privileges (root) are required to follow the deployment steps.

Database Requirements

Item                                                 Notes

MS SQL Server 2016 Express/Standard/Enterprise

Docker Engine Requirements

Item              Notes

Docker CE         Docker CE 18+
docker-compose    Version 1.23.1

Browser Compatibility

Item       Version     Notes

Chrome     Latest
Firefox                Not tested
IE                     Not tested. An on-demand testing cycle can be planned.


Cisco Unified CCX Compatibility

11.5 and higher (Enhanced & Premium)


Installation Steps

The Internet should be available on the machine where the application is being installed, and connections on port 9242 should be allowed in the network firewall to carry out the installation steps. Commands that start with a # require root privileges to execute; the leading # itself is not part of the command.

Allow ports in the firewall

For the internal communication of Docker Swarm, you'll need to allow communication (both inbound and outbound) on ports 8899/tcp and 4499/tcp.

To start the firewall on CentOS (if it isn't running already), execute the following commands on all the cluster machines:

# systemctl enable firewalld
# systemctl start firewalld

To allow the ports in the CentOS firewall, execute the following commands on all the cluster machines:

# firewall-cmd --add-port=8899/tcp --permanent
# firewall-cmd --add-port=4499/tcp --permanent

# firewall-cmd --reload
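After reloading, it's worth confirming that the ports are actually reachable from the peer machine before initializing the swarm. A minimal sketch using bash's built-in /dev/tcp device; the check_port helper is our own illustration, not part of the deployment scripts:

```shell
# check_port: succeed if a TCP connection to host:port can be opened.
# Uses bash's /dev/tcp pseudo-device, so no extra tools are required.
check_port() {
  local host="$1" port="$2"
  if timeout 3 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "OK: ${host}:${port} reachable"
  else
    echo "FAIL: ${host}:${port} not reachable"
    return 1
  fi
}

# Run from each node against its peer, for example:
# check_port 192.168.1.80 8899
# check_port 192.168.1.80 4499
```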

Configure Log Rotation

Add the following content to the /etc/docker/daemon.json file (create the file if it does not exist already) and restart the Docker daemon using systemctl restart docker. In the case of HA deployment, perform this step on all the machines in the cluster.

{
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "50m",
        "max-file": "3"
    }
}
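Since an invalid daemon.json prevents dockerd from starting, it can help to validate the file before restarting Docker. A sketch; the write_daemon_json function is our own illustration, and it overwrites the target file, so merge by hand if daemon.json already holds other settings:

```shell
# write_daemon_json: write the log-rotation settings to the given path and
# validate the JSON so a typo fails here instead of at the next docker restart.
write_daemon_json() {
  local target="$1"
  cat > "$target" <<'EOF'
{
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "50m",
        "max-file": "3"
    }
}
EOF
  python3 -m json.tool "$target" >/dev/null && echo "daemon.json OK"
}

# On the real host (then: systemctl restart docker):
# write_daemon_json /etc/docker/daemon.json
```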


Creating Databases

Create a database for UMM in the MS SQL server with a suitable name, then follow the application installation steps.

Installing Application

  1. Download the deployment script supervisor-tools-deployment.sh and place it in the /root directory. This script will:
    1. delete the supervisor-tools-deployment directory in the present working directory if it exists.
    2. clone the supervisor-tools-deployment repository from gitlab in the present working directory.
  2. To execute the script, give it execute permissions and run it:

    # chmod +x supervisor-tools-deployment.sh
    # ./supervisor-tools-deployment.sh

  3. Before starting the application, we need to set up Vault for database password encryption. We will use a single database user for all components of the application. If Vault is already set up and running on another server, you will only need to update two variables in a file in the next step. Follow these steps to set up Vault locally.
    1. Run these commands inside /root/supervisor-tools-deployment directory 

      chmod 777 initVault.sh
      ./initVault.sh
    2. Open this URL in a browser: http://<supervisor-tools-ip>:8200/ui/vault/init
    3. A form will open; enter 2 in both fields and click on the Initialize button.
    4. The Initialize button will create a token and two keys. The token is used for authentication and the two keys for unsealing the Vault. Click on the eye icon, copy and save these three values, and click on the Proceed to Unseal button.
    5. On the next screen, enter the first key in the form and click on the Unseal button; enter the second key on the next page and click on the Unseal button again.
    6. Now enter the token and click on the Sign in button. Vault is now configured and we can create secrets. A default Cubbyhole secret engine is created, and we will use this engine to create a secret for the database password.
    7. Click on the cubbyhole engine under secrets and then click on the Create secret button.
    8. In the create secret form, enter secret/database in the path field and, under secret data, enter db_password as the name and the database password as its value. Click on Add and then on the Save button. A secret for the database password is now created.
    9. Notes: The path should be secret/database and the secret name should be db_password. Vault must be unsealed again if any of the components is restarted, and it should remain unsealed until all components have started.
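The same initialization can also be scripted against Vault's HTTP API instead of the browser UI. A sketch under the assumption that Vault listens on port 8200 as above; the keys and token in angle brackets are placeholders returned by the init call, not real values:

```shell
VAULT_ADDR="http://<supervisor-tools-ip>:8200"

# initialize with 2 key shares and a threshold of 2 (matches the UI form)
curl -s -X PUT "$VAULT_ADDR/v1/sys/init" \
     -d '{"secret_shares": 2, "secret_threshold": 2}'

# unseal twice, once per key returned by the init call
curl -s -X PUT "$VAULT_ADDR/v1/sys/unseal" -d '{"key": "<key-1>"}'
curl -s -X PUT "$VAULT_ADDR/v1/sys/unseal" -d '{"key": "<key-2>"}'

# store the database password in the cubbyhole engine at secret/database
curl -s -X POST "$VAULT_ADDR/v1/cubbyhole/secret/database" \
     -H "X-Vault-Token: <root-token>" \
     -d '{"db_password": "<database-password>"}'
```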
  4. Update environment variables in the following files inside /root/supervisor-tools-deployment/docker/environment_variables folder.

    1. environment-variables.env

      Name    Description

      Do not change the default values for non-HA deployment. For HA, use SQL server cluster settings instead of the defaults.

      Set the following environment variable values according to database connectivity and other configurations.

      For TAM:

      TAM_DB_URL        Database connection URL. For example:
                        • jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name
                        • jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name;instanceName=SomeInstance
      TAM_DB_USER       Database user
      TAM_DB_DRIVER     JDBC driver, e.g., net.sourceforge.jtds.jdbc.Driver
      TAM_DB_DIALECT    Database dialect, e.g., org.hibernate.dialect.SQLServer2008Dialect
      TAM_PORT          e.g., 8080 (port on which Team Administrator is deployed). Required in case of UCCX only.
      For KPI (UCCE only):

      KPI_DB_URL        Database connection URL. For example:
                        • jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name
                        • jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name;instanceName=SomeInstance
      KPI_DB_USER       Database user
      KPI_DB_PASS       Database password
      KPI_DB_DRIVER     JDBC driver, e.g., net.sourceforge.jtds.jdbc.Driver
      KPI_DB_DIALECT    Database dialect, e.g., org.hibernate.dialect.SQLServer2008Dialect

      For Prompts:

      PROMPT_DB_URL       Database connection URL. For example:
                          • jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name
                          • jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name;instanceName=SomeInstance
      PROMPT_DB_USER      Database user
      PROMPT_DB_DRIVER    JDBC driver, e.g., net.sourceforge.jtds.jdbc.Driver
      PROMPT_DB_DIALECT   Database dialect, e.g., org.hibernate.dialect.SQLServer2008Dialect
      For EABC:

      EABC_DB_URL       Database connection URL. For example:
                        • jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name
                        • jdbc:jtds:sqlserver://<MACHINE-IP or FQDN>:PORT/db_name;instanceName=SomeInstance
      EABC_DB_USER      Database user
      EABC_DB_DRIVER    JDBC driver, e.g., net.sourceforge.jtds.jdbc.Driver
      EABC_DB_DIALECT   Database dialect, e.g., org.hibernate.dialect.SQLServer2008Dialect
      For UMM:

      PRIM_FINESSE_IP            Primary Finesse URL, including port (if not 80 or 443)
      SEC_FINESSE_IP             Secondary Finesse URL, including port (if not 80 or 443)
      FINESSE_USER               Finesse administrator user
      FINESSE_PASS               Finesse administrator password
      DB_URL                     UMM database URL
      DB_DRIVER                  UMM database driver
      DB_DIALECT                 UMM database dialect
      DB_USER                    UMM database username
      ADMIN_PASS                 The password of the admin user
      SSL_TRUST_STORE_PATH       Path of the SSL truststore, including the file name. This truststore should include the UCCX SSL certificates if the Finesse APIs need to be accessed via HTTPS.
      SSL_TRUST_STORE_PASSWORD   Truststore password
      SSO_ENABLED                Enables or disables the UCCX SSO functionality. Possible values: true or false
      REDIRECT_BASE_URL          Callback URL for supervisor tools. The format would be: https://IP:umm-port/umm/base/index.html
      IDS1_URL                   Base URI of the UCCX node. The format would be: https://<fully qualified host name of UCCX publisher node>:8553
      IDS2_URL                   If UCCX is deployed in High Availability mode, the base URI of the second node. The format would be: https://<fully qualified host name of UCCX subscriber node>:8553
      IDS_CLIENT_ID              Register the supervisor tools application in IDS to get a client ID by following the steps here. Example: 973a8f41be45426510c971ce41b6feae8d71bc22
      UMM_BASE_URL               UMM base URL. It should be: https://IP:umm-port

      For all components:

      TOKEN_URL      Vault server URL, http://VAULT-IP:PORT. If Vault is set up following step 3, do not change the default value.
      TOKEN          Vault master token
      SUP_VERSION    Keep the default value; do not change it.
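As a reference point, a filled-in fragment of environment-variables.env might look like the snippet below. Every value is a made-up example; substitute your own hosts, ports, and credentials:

```shell
# Hypothetical excerpt of environment-variables.env (example values only)
TAM_DB_URL="jdbc:jtds:sqlserver://192.168.1.50:1433/tam_db"
TAM_DB_USER="sup_tools"
TAM_DB_DRIVER="net.sourceforge.jtds.jdbc.Driver"
TAM_DB_DIALECT="org.hibernate.dialect.SQLServer2008Dialect"
TAM_PORT="8080"
TOKEN_URL="http://127.0.0.1:8200"
TOKEN="s.examplevaulttoken"
```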
  5. Get domain/CA signed SSL certificates for SupervisorTools FQDN/CN and place the files in /root/supervisor-tools-deployment/docker/certificates folder. The file names should be server.crt and server.key.
  6. For HA, copy the supervisor-tools-deployment directory to the second machine by executing the command below:

    # scp -r supervisor-tools-deployment root@machine-ip:~/

  7. Go to the second machine and update the environment variables where necessary.
  8. Execute the following commands inside /root/supervisor-tools-deployment directory on both machines.

    # chmod 755 install.sh
    # ./install.sh

  9. Run the following command to ensure that all the components are up and running. The screenshot below shows a sample response for a standalone non-HA deployment. 

    # docker ps


Virtual IP configuration

Repeat the following steps for all the machines in the HA cluster.

  1. Download the keepalived.sh script and place it in the /root directory.
  2. Give execute permission and execute the script: 

    # chmod +x keepalived.sh
    # ./keepalived.sh

  3. Configure the keep.env file inside the /root/keep-alived folder.

    Name                       Description

    KEEPALIVED_UNICAST_PEERS   Peer machine IP. For example: 192.168.1.80
    KEEPALIVED_VIRTUAL_IPS     Virtual IP of the cluster. It should be available in the LAN. For example: 192.168.1.245
    KEEPALIVED_PRIORITY        Priority of the node. The instance with the lower number will have higher priority. It can take any value from 1-255.
    KEEPALIVED_INTERFACE       Name of the network interface with which your machine is connected to the network. On CentOS, ifconfig or ip addr sh will show all the network interfaces and assigned addresses.
    CLEARANCE_TIMEOUT          Initial startup time of the application monitored by keepalived, in seconds. A nominal value of 60-120 is good enough.
    KEEPALIVED_ROUTER_ID       Do not change this value.
    SCRIPT_VAR                 This script is polled every 2 seconds. Keepalived relinquishes control if the shell script returns a non-zero response. It can check either umm or any backend microservice API. For example: pidof dockerd && wget -O index.html http://localhost:7575/
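For orientation, a filled-in keep.env for the first node might look like the fragment below. All values are examples and must match your own LAN addressing and interface names:

```shell
# Hypothetical keep.env for the first node (example values only)
KEEPALIVED_UNICAST_PEERS="192.168.1.80"
KEEPALIVED_VIRTUAL_IPS="192.168.1.245"
KEEPALIVED_PRIORITY="100"
KEEPALIVED_INTERFACE="ens192"
CLEARANCE_TIMEOUT="90"
SCRIPT_VAR="pidof dockerd && wget -O index.html http://localhost:7575/"
```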

  4. Update the GAT_URL variable in the environment variables to hold the Virtual IP for the front end. In the microservices table in the UMM database, replace the ip_address column value for the tam, prompts, and eabc microservices with the Virtual IP, and change the ports to the corresponding ports exposed in the docker-compose file.

  5. Give the execute permission and execute the script on both machines:

    # chmod +x keep-command.sh
    # ./keep-command.sh


Adding License

  1. Browse to http://<MACHINE_IP or FQDN>/umm in your browser (the FQDN will be the domain name assigned to the IP/VIP).
  2. Click on the red warning icon on the right, paste the license into the field, and click Save.