Installation Guide


Internet access must be available on the machine where the application is being installed, and the network firewall must allow connections on port 9242, in order to carry out the installation steps.

Commands that start with a # require root user privileges to execute. The leading # is a prompt and is not part of the command.

Requirements for Voice Recording Solution


There are two types of installation: EFCX and Cisco (UCCX & UCCE). For EFCX, most of the steps below are not required, since the Keycloak, JtapiConnector, and Mixer components are not used.

Allow ports in the firewall for VRS

If there is an active firewall, allow the following ports.

443/tcp
444/tcp
8088/tcp
5060/tcp (only for Cisco)
16386-32768/udp (only for Cisco)
8021/tcp
1433/tcp
5432/tcp


# Additional ports to open in case of High Availability (HA)
8500/tcp
8300/tcp
8301/tcp/udp
8302/tcp/udp
8303/tcp/udp
8600/tcp/udp
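On distributions using firewalld, the ports above can be opened along the following lines (a minimal sketch; trim the list to your deployment type, and translate to ufw or iptables equivalents if those are in use):

    Bash
    # core VRS ports (run as root)
    firewall-cmd --permanent --add-port=443/tcp --add-port=444/tcp --add-port=8088/tcp --add-port=8021/tcp
    # database ports: 1433 for SQL Server, 5432 for PostgreSQL
    firewall-cmd --permanent --add-port=1433/tcp --add-port=5432/tcp
    # Cisco only: SIP signalling and the RTP media range
    firewall-cmd --permanent --add-port=5060/tcp --add-port=16386-32768/udp
    firewall-cmd --reload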

Installation Steps

  1. Please make sure that Solution Prerequisites are met for the desired deployment type. 

  2. Download the deployment script deployment.sh and place it in the user’s home or any desired directory. This script will:

    1. Delete the recording-solution directory if it exists.

    2. Clone the required files for deployment.

  3. To execute the script, give it execute permissions and run it as follows.

    Bash
    $ chmod 755 deployment.sh                  
    $ ./deployment.sh
    

This command will clone the skeleton project, recording-solution. This recording-solution directory contains all the files required for deployment. It will be cloned into the same directory where the deployment script is placed.
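For reference, the script's behavior amounts to roughly the following (a sketch only; the actual repository URL is embedded in deployment.sh, the one below is a placeholder):

    Bash
    #!/bin/bash
    # remove any previous checkout so the clone starts clean
    rm -rf recording-solution
    # fetch the deployment skeleton (placeholder URL)
    git clone <repository-url> recording-solution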

Cloning is now complete: the VRS files, deployment files, and directories have been downloaded. We can proceed to configuration.

  4. Follow this guide to install and configure FreeSWITCH. The recording path in FreeSWITCH and in the Docker Compose volume must be the same.

  5. Follow this guide to configure ESL (for pause and resume recording).

  6. Follow this guide to create an application user on CUCM for the JTAPI Connector.

  7. Create a database in SQL Server for VRS with the name vrs and run the SQL script (sqlserver.sql) located in recording-solution/data/scripts. This script generates the required database tables; a sqlcmd sketch is shown below.
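    If the sqlcmd utility is available on a machine that can reach the SQL Server instance, the database and its tables can be created as follows (host and credentials are placeholders):

    Bash
    # create the empty vrs database
    sqlcmd -S <sqlserver-host> -U <admin-user> -P '<password>' -Q "CREATE DATABASE vrs"
    # run the schema script against it
    sqlcmd -S <sqlserver-host> -U <admin-user> -P '<password>' -d vrs -i recording-solution/data/scripts/sqlserver.sql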

  8. Navigate to the recording-solution/docker directory.

  9. Open the docker-compose-cisco.yml file.

    • For non-HA deployments: Uncomment the archival-process container and keep the consul container commented.

    • For HA deployments: Uncomment the consul container and keep the archival-process container commented.
      After making the changes, save and close the file.

  10. Open config.env in the same directory and update the environment variables given below.

1. VRS_URL: Keep it as "http"

2. TZ: Time zone, e.g., Asia/Karachi

3. LOCAL_MACHINE_IP: FQDN mapped to the Media Server IP (only used for on-prem single-tenant deployment)

4. EFCX_FQDN: FQDN of EFCX, e.g., https://FQDN. This variable is only used for single-tenant (on-prem) deployment

5. DEPLOYMENT_PROFILE: Set it to either CISCO or EFCX, as per the deployment profile

6. PEER_ADDRESS: IP or FQDN of the second Recorder VM (for HA)

7. JTAPI_HA_MODE: Set it to true for HA and false for non-HA

8. SCREEN_RECORDING: If screen recording is enabled, set it to true; otherwise, false

9. CONSUL_URL: (Only for Cisco HA) IP address of the local machine with port 8500, e.g., http://192.168.1.101:8500

10. CISCO_TYPE: Type of Cisco deployment, either UCCX or UCCE

11. DIRECTORY_PATH_TO_MONITOR: Path for the archival process to monitor; it should be the same path where the sessions are kept, e.g., /var/vrs/recordings/cucmRecording/sessions/

12. FINESSE_URL: URL or FQDN of Finesse, e.g., https://uccx12-5p.ucce.ipcc:8445

13. RETRY_LIMIT: Number of retries in case the connection fails, e.g., 2

14. ARCHIVAL_PROCESS_NODE: active

15. CUCM_APPLICATION_USER_NAME: CUCM application username created in step 6

16. CUCM_APPLICATION_USER_PASSWORD: Password for the CUCM application user

17. CUCM_IP: IP address of the Call Manager

18. LICENSE_CHECKING: false (keep it false)

19. NO_OF_DEL_DAYS: Number of days after which the streams are deleted
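For orientation, a partial config.env for a non-HA Cisco UCCX deployment might look like the following (all values are illustrative placeholders; set them for your environment):

    Bash
    VRS_URL=http
    TZ=Asia/Karachi
    DEPLOYMENT_PROFILE=CISCO
    CISCO_TYPE=UCCX
    JTAPI_HA_MODE=false
    SCREEN_RECORDING=false
    DIRECTORY_PATH_TO_MONITOR=/var/vrs/recordings/cucmRecording/sessions/
    FINESSE_URL=https://uccx12-5p.ucce.ipcc:8445
    RETRY_LIMIT=2
    ARCHIVAL_PROCESS_NODE=active
    CUCM_APPLICATION_USER_NAME=<cucm-app-user>
    CUCM_APPLICATION_USER_PASSWORD=<password>
    CUCM_IP=192.168.1.10
    LICENSE_CHECKING=false
    NO_OF_DEL_DAYS=30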

The environment variables below are only for UCCX.

1. CCX_PRIMARY_IP: Primary UCCX IP address, e.g., 192.168.1.33

2. CCX_SECONDARY_IP: Secondary UCCX IP address, e.g., 192.168.1.33

3. CCX_ADMIN_USERNAME: CCX admin username

4. CCX_ADMIN_PASSWORD: CCX admin password

The environment variables below are only for UCCE.

1. UCCE_IP: UCCE IP address

2. UCCE_DATABASE: UCCE awdb database name

3. UCCE_USERNAME: UCCE awdb database user's username

4. UCCE_PASSWORD: UCCE awdb database user's password

The Keycloak variables below are no longer needed; they are listed for reference only.


1. KEYCLOAK_REALM_NAME: Realm name from EFCX Keycloak. Add "expertflow"

2. KEYCLOAK_CLIENT_ID: Keycloak client ID from EFCX Keycloak. Add "cim"

3. KEYCLOAK_CLIENT_SECRET: Copy it from Realm > Clients > cim > Credentials

4. KEYCLOAK_URL: FQDN of EFCX, e.g., https://efcx-fqdn.expertflow.com

  11. Add the following environment variables for pause and resume recording.


1. ESL_HOST: IP address of the Recorder machine

2. ESL_PORT: Port on which ESL listens; commonly 8021

3. ESL_PASSWORD: Password of ESL

4. REC_PATH_STREAMS: Path where the streams are saved, e.g., /var/vrs/recordings/cucmRecording/streams
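As an example, the pause/resume block in config.env might read as follows (the host is a placeholder, and ClueCon is only FreeSWITCH's out-of-the-box ESL default, which should be changed in production):

    Bash
    ESL_HOST=192.168.1.101
    ESL_PORT=8021
    ESL_PASSWORD=ClueCon
    REC_PATH_STREAMS=/var/vrs/recordings/cucmRecording/streams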

Update the Database environment variables in config.env.


1. DB_DRIVER: Driver on which the database is running, i.e., postgres, mysql, or the SQL Server driver

2. DB_ENGINE: Engine on which the database is running, i.e., PostgreSQL, MySQL, or SQL Server

3. DB_HOST: IP address of the host on which the database is active

4. DB_NAME: Name of the database. In the case of EFCX, it can be fetched from /etc/fusionpbx/config.conf

5. DB_USER: Username for the database. In the case of EFCX, it can be fetched from /etc/fusionpbx/config.conf

6. DB_PASSWORD: Password for the database. In the case of EFCX, it can be fetched from /etc/fusionpbx/config.conf

7. DB_PORT: Port of the database
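For instance, a PostgreSQL-backed EFCX deployment might use entries along these lines (values are placeholders; take the real name, user, and password from /etc/fusionpbx/config.conf):

    Bash
    DB_DRIVER=postgres
    DB_ENGINE=PostgreSQL
    DB_HOST=192.168.1.20
    DB_NAME=<database-name>
    DB_USER=<database-user>
    DB_PASSWORD=<password>
    DB_PORT=5432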

Update the following additional variables related to multitenancy.

ROOT_DOMAIN: For multitenant deployment, the root domain of the VRS nginx, e.g., vrs.expertflow.com (the value will be equal to the tenantId for on-prem deployment)

TENANT_URL: CX tenant URL used to fetch tenants, e.g., https://tenant4.expertflow.com/cx-tenant/tenant. Any of the FQDNs that are mapped in EFCX can be used in place of TENANT_URL

CX_ROOT_DOMAIN: Root domain of the CX solution, e.g., expertflow.com (the value will be "NIL" for on-prem deployment)
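Using the example values from the table, a multitenant deployment would set:

    Bash
    ROOT_DOMAIN=vrs.expertflow.com
    TENANT_URL=https://tenant4.expertflow.com/cx-tenant/tenant
    CX_ROOT_DOMAIN=expertflow.com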

  12. For EFCX deployment, the line <param name="url" value="http://127.0.0.1/app/xml_cdr/xml_cdr_import.php"/> in the xml_cdr.conf.xml file must be uncommented. Run the following command to open xml_cdr.conf.xml:

    nano /etc/freeswitch/autoload_configs/xml_cdr.conf.xml

    Once the file is open, uncomment the line and save it. Then restart FreeSWITCH:

    systemctl restart freeswitch
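    To verify the change, a quick grep should print the line without the surrounding XML comment markers:

    Bash
    grep -n "xml_cdr_import.php" /etc/freeswitch/autoload_configs/xml_cdr.conf.xml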

  13. For the EFCX deployment profile, the root domain vrs.expertflow.com should be mapped to the IP address of the VRS server. Ask IT to configure this.
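    Until DNS is in place, the mapping can be simulated for testing with a hosts-file entry (the IP below is a placeholder; production deployments should use a proper DNS record):

    Bash
    echo "192.168.1.50  vrs.expertflow.com" >> /etc/hosts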

  14. Each tenant's root domain should be routed to vrs.expertflow.com on port 444.

    For example, if we have two tenants, mtt01 and mtt02, and the root domain is vrs.expertflow.com, this will make the VRS accessible at the following URLs:

    https://mtt01.vrs.expertflow.com
    https://mtt02.vrs.expertflow.com

  15. Similar to the EFCX setup, an FQDN (Fully Qualified Domain Name) must be obtained from the IT team to access the VRS application with the Cisco deployment profile.

    The root domain, vrs.expertflow.com, should remain static and be routed to the VRS Server IP. Tenant-specific subdomains are created by prefixing the tenant name to the root domain, for example:

    https://mtt01.vrs.expertflow.com

    In this configuration:

    • vrs.expertflow.com and mtt01.vrs.expertflow.com → point to the VRS Server IP

    Each tenant subdomain must route to the CX Server IP (or the corresponding VRS Server IP if applicable). Note that VRS with the Cisco deployment profile is only supported for a single CX tenant.

  16. To replace the self-signed certificates for VRS, obtain the CA- or domain-signed certificate .crt and .key files, name them server.crt and server.key, and replace the two files in /recording-solution/config/certificates with them. The names must match exactly; see the sketch below.
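    Assuming the signed files were delivered as my-domain.crt and my-domain.key (placeholder names), the replacement amounts to:

    Bash
    cp my-domain.crt recording-solution/config/certificates/server.crt
    cp my-domain.key recording-solution/config/certificates/server.key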

  17. Navigate to the recording-solution directory and assign execute permissions to the relevant install script:

    Bash
    chmod 755 install-cisco.sh   # in case of CISCO
    chmod 755 install-efcx.sh    # in case of EFCX
    chmod 755 install-replay.sh  # in case of HA CISCO
  18. Run ./install-efcx.sh for EFCX, or ./install-cisco.sh for Cisco UCCX and UCCE.

  19. Run the following command to ensure all the components are running:

    # docker ps
  20. Once VRS is deployed with the EFCX profile, run this command to give permissions to the recording directory:

    chmod 777 -R /var/lib/freeswitch/recordings/
  21. To access the application, see step 15 in the case of Cisco, and step 13 for EFCX.

  22. For HA-specific deployments, proceed with the following steps.

Copy the config.env file and paste it onto the Recorder 2 VM and the Replay Server, as most of the environment variables are the same.

  1. Follow this guide to create an rsync job on all VMs: Recorder 1, Recorder 2, and the Replay Server. An illustrative cron entry is sketched below.
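    For illustration only (the linked guide defines the actual job; the host and paths here are placeholders), a crontab entry on Recorder 1 could push new recordings to the Replay Server every five minutes:

    Bash
    # crontab -e, then add:
    */5 * * * * rsync -az /var/vrs/recordings/ replay-server:/var/vrs/recordings/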

  2. Go to the Replay Server, add execute permission to the install-replay.sh script, and then run ./install-replay.sh.

  3. Configure two SIP trunks on Cisco Call Manager in HA mode and set priorities for both machines.

  4. Perform the following steps on both Recorder 1 and Recorder 2.

  5. Open recording-solution/docker/docker-compose-cisco.yml.

  6. Inside the docker-compose-cisco.yml file, uncomment the Consul container.

  7. In the container_name variable, set the name to consul1 for Recorder 1 and consul2 for Recorder 2.

  8. Add your network interface card; it can be found using the ifconfig or ip address command.

    - CONSUL_BIND_INTERFACE=ens192   # e.g., ens192
  9. In the command section, set the Consul node name in -node=<any-name> as shown in the code snippet below. This name must be different from the second recorder's.

  10. Set -advertise=<Local-Machine-IP> and -retry-join=<IP-of-second-recorder>. Keep the other values as they are.

    command: "agent -node=consul-106 -server -ui -bind=0.0.0.0 -client=0.0.0.0 -advertise=192.168.1.106 -bootstrap-expect=2 -retry-join=192.168.1.101"
  11. Save changes and exit.

  12. Run ./install-cisco.sh.

  13. Check the containers on both recorders using the docker ps command.

  14. Deploy Recorder 2 in the same way.

  15. Deploy the Replay Server: just add the config.env file and run ./install-replay.sh.