Internet access should be available on the machine where the application is being installed, and connections on port 9242 should be allowed in the network firewall to carry out the installation steps.
All commands start with a #, indicating that root user privileges are required to execute them. The leading # is not part of the command.
Requirements for Voice Recording Solution
- Replay Server for HA
- SQL Server 2019
- Two SIP Trunks for HA
- VRS solution on two separate machines for HA
- EFCX Server (for KeyCloak)
- Docker and Docker Compose
- Git
- An FQDN for the VRS nginx in the format vrs.<ef-cx-root-domain> in case of multi-tenancy. For example, if the CX core root domain is expertflow.com, the VRS root domain will be vrs.expertflow.com.
- <tenant-name>.<vrs-root-domain> should be routed to vrs.expertflow.com on port 444. For example, with two tenants, mtt01 and mtt02, and the root domain vrs.expertflow.com, mtt01.vrs and mtt02.vrs should be routed to vrs.expertflow.com on port 444 so that the VRS is accessible at the following URLs (see the resolution check below):
- https://mtt01.vrs.expertflow.com:444/#/login and https://mtt02.vrs.expertflow.com:444/#/login
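As a quick check that the tenant subdomains are routed correctly, DNS resolution can be verified from any machine (hostnames taken from the example above; substitute your own tenants and root domain):

```bash
# Verify that each tenant subdomain resolves to the VRS host
dig +short mtt01.vrs.expertflow.com
dig +short mtt02.vrs.expertflow.com
# Both should return the VRS server IP; the UI is then reachable at
# https://mtt01.vrs.expertflow.com:444/#/login
```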
There are two types of installation: EFCX and Cisco (UCCX & UCCE). For EFCX, most of the steps are not required, since Keycloak, the JTAPI Connector, and the Mixer are not used.
Allow ports in the firewall for VRS
If there is an active firewall, allow the following ports (a firewalld sketch follows the list):
443/tcp
444/tcp
8088/tcp
5060/tcp (only for Cisco)
16386-32768/udp (only for Cisco)
8021/tcp
1433/tcp
5432/tcp
Additional ports to open in case of High Availability (HA):
8500/tcp
8300/tcp
8301/tcp/udp
8302/tcp/udp
8303/tcp/udp
8600/tcp/udp
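As a sketch, on a host using firewalld the ports above could be opened as follows; adjust for your firewall of choice, and apply the Cisco-only and HA-only groups only where the notes above say they apply:

```bash
# Base VRS ports (run as root)
firewall-cmd --permanent --add-port=443/tcp --add-port=444/tcp \
  --add-port=8088/tcp --add-port=8021/tcp \
  --add-port=1433/tcp --add-port=5432/tcp
# Cisco-only ports
firewall-cmd --permanent --add-port=5060/tcp --add-port=16386-32768/udp
# HA-only (Consul) ports
firewall-cmd --permanent --add-port=8500/tcp --add-port=8300/tcp \
  --add-port=8301/tcp --add-port=8301/udp \
  --add-port=8302/tcp --add-port=8302/udp \
  --add-port=8303/tcp --add-port=8303/udp \
  --add-port=8600/tcp --add-port=8600/udp
firewall-cmd --reload
```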
Installation Steps
- Please make sure that the Solution Prerequisites are met for the desired deployment type.
- Download the deployment script deployment.sh and place it in the user's home or any desired directory. This script will:
  - Delete the recording-solution directory if it exists.
  - Clone the required files for deployment.
- To execute the script, give it execute permissions and run it as follows.

```bash
chmod 755 deployment.sh
./deployment.sh
```

This command clones the skeleton project recording-solution, which contains all the files required for deployment. It is cloned into the same directory where the deployment script is placed.
Cloning is now complete: the VRS files, deployment files, and directories have been downloaded, and we can proceed to configuration. A quick sanity check is sketched below.
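As a hedged sanity check, confirm that the directories referenced later in this guide exist (the repository layout may differ):

```bash
# Key directories used in the following steps
ls recording-solution/docker recording-solution/data/scripts recording-solution/config/certificates
```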
- Follow this guide to install and configure FreeSWITCH. The recording path in FreeSWITCH and in the Docker Compose volume must be the same.
- Follow this guide to configure ESL (for Pause and Resume Recording).
- Follow this guide to create an application user on CUCM for the JTAPI Connector.
- Create a database in SQL Server for VRS with the name vrs and run the SQL script (sqlserver.sql) located in recording-solution/data/scripts. This script generates the required database tables (see the sketch below).
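A minimal sketch using sqlcmd, assuming SQL Server authentication; the host and credentials below are placeholders, not values from this guide:

```bash
# Create the vrs database and load the schema (host, user, and password are placeholders)
sqlcmd -S 192.168.1.20 -U sa -P '<password>' -Q "CREATE DATABASE vrs"
sqlcmd -S 192.168.1.20 -U sa -P '<password>' -d vrs -i recording-solution/data/scripts/sqlserver.sql
```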
- Navigate to the recording-solution/docker directory.
- Open the docker-compose-cisco.yml file.
  - For non-HA deployments: uncomment the archival-process container and keep the consul container commented.
  - For HA deployments: uncomment the consul container and keep the archival-process container commented.
  - After making the changes, save and close the file.
- Open config.env in the same directory and update the environment variables given below.
| # | Name | Description |
|---|---|---|
| 1 | VRS_URL | Keep it as "http" |
| 2 | TZ | Time zone, e.g., Asia/Karachi |
| 3 | LOCAL_MACHINE_IP | FQDN that is mapped to the Media Server IP (only used for on-prem single-tenant deployment) |
| 4 | EFCX_FQDN | FQDN of EFCX, e.g., https://FQDN. This variable is only used for single-tenant (on-prem) deployment |
| 5 | DEPLOYMENT_PROFILE | Set it to either CISCO or EFCX as per the deployment profile |
| 6 | PEER_ADDRESS | IP or FQDN of the second Recorder VM (for HA) |
| 7 | JTAPI_HA_MODE | Set it to true for HA and false for non-HA |
| 8 | SCREEN_RECORDING | If screen recording is enabled, set it to true; otherwise, false |
| 9 | CONSUL_URL | (Only for Cisco HA) IP address of the local machine with port 8500, e.g., http://192.168.1.101:8500 |
| 10 | CISCO_TYPE | Type of Cisco deployment, either UCCX or UCCE |
| 11 | DIRECTORY_PATH_TO_MONITOR | Path for the archival process to monitor; should be the same path where sessions are kept, e.g., /var/vrs/recordings/cucmRecording/sessions/ |
| 12 | FINESSE_URL | URL or FQDN of Finesse, e.g., https://uccx12-5p.ucce.ipcc:8445 |
| 13 | RETRY_LIMIT | Number of retries if the connection fails, e.g., 2 |
| 14 | ARCHIVAL_PROCESS_NODE | Set it to active |
| 15 | CUCM_APPLICATION_USER_NAME | CUCM application username that was created in step 6 |
| 16 | CUCM_APPLICATION_USER_PASSWORD | Password for the CUCM application user |
| 17 | CUCM_IP | IP address of Call Manager |
| 18 | LICENSE_CHECKING | false (keep it false) |
| 19 |  | Number of days after which the streams will be deleted |
The environment variables below apply only to UCCX.
| # | Name | Description |
|---|---|---|
| 1 | CCX_PRIMARY_IP | Primary UCCX IP address, e.g., 192.168.1.33 |
| 2 | CCX_SECONDARY_IP | Secondary UCCX IP address, e.g., 192.168.1.33 |
| 3 | CCX_ADMIN_USERNAME | CCX Admin username |
| 4 | CCX_ADMIN_PASSWORD | CCX Admin password |
The environment variables below apply only to UCCE.
| # | Name | Description |
|---|---|---|
| 1 | UCCE_IP | UCCE IP address |
| 2 | UCCE_DATABASE | UCCE awdb database name |
| 3 | UCCE_USERNAME | UCCE awdb database user's username |
| 4 | UCCE_PASSWORD | UCCE awdb database user's password |
The following KeyCloak variables are no longer needed.
| # | Name | Description |
|---|---|---|
| 1 | KEYCLOAK_REALM_NAME | Realm name from EFCX Keycloak. Add "expertflow" |
| 2 | KEYCLOAK_CLIENT_ID | Keycloak client ID from EFCX Keycloak. Add "cim" |
| 3 | KEYCLOAK_CLIENT_SECRET | Copy it from Realm > Clients > cim > Credentials |
| 4 | KEYCLOAK_URL | FQDN of EFCX, e.g., https://efcx-fqdn.expertflow.com |
- Add the following environment variables for pause and resume recording.

| # | Name | Description |
|---|---|---|
| 1 | ESL_HOST | IP address of the recorder machine |
| 2 | ESL_PORT | Port on the recorder where ESL listens; commonly 8021 |
| 3 | ESL_PASSWORD | Password of ESL |
| 4 | REC_PATH_STREAMS | Path where streams are saved, e.g., /var/vrs/recordings/cucmRecording/streams |
Update the database environment variables in config.env.

| # | Name | Description |
|---|---|---|
| 1 | DB_DRIVER | Driver on which the database is running, e.g., postgres, mysql, or the SQL Server driver |
| 2 | DB_ENGINE | Engine on which the database is running, e.g., PostgreSQL, MySQL, or SQL Server |
| 3 | DB_HOST | IP address of the host on which the database is running |
| 4 | DB_NAME | Name of the database. For EFCX, it can be fetched from /etc/fusionpbx/config.conf |
| 5 | DB_USER | Username for the database. For EFCX, it can be fetched from /etc/fusionpbx/config.conf |
| 6 | DB_PASSWORD | Password for the database. For EFCX, it can be fetched from /etc/fusionpbx/config.conf |
| 7 | DB_PORT | Port of the database |
Update the following additional variables for multi-tenancy (an illustrative config.env excerpt follows the table).

| Name | Description |
|---|---|
| ROOT_DOMAIN | For multi-tenant deployment: root domain of the VRS nginx, e.g., vrs.expertflow.com (the value will be equal to the tenantId for on-prem deployment) |
| TENANT_URL | CX tenant URL used to fetch tenants, e.g., https://tenant4.expertflow.com/cx-tenant/tenant |
| CX_ROOT_DOMAIN | Root domain of the CX solution, e.g., expertflow.com (the value will be "NIL" for on-prem deployment) |
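To tie the tables together, a hedged config.env excerpt for a single-tenant EFCX deployment might look like the following; every value is a placeholder, and the tables above remain the authoritative reference:

```bash
# Illustrative config.env excerpt -- all values are placeholders
DEPLOYMENT_PROFILE=EFCX
TZ=Asia/Karachi
EFCX_FQDN=https://efcx.example.com
SCREEN_RECORDING=false
LICENSE_CHECKING=false
DB_DRIVER=postgres
DB_ENGINE=PostgreSQL
DB_HOST=192.168.1.50
DB_PORT=5432
DB_NAME=fusionpbx        # for EFCX, copy from /etc/fusionpbx/config.conf
DB_USER=fusionpbx        # for EFCX, copy from /etc/fusionpbx/config.conf
DB_PASSWORD=changeme     # for EFCX, copy from /etc/fusionpbx/config.conf
CX_ROOT_DOMAIN=NIL       # NIL for on-prem deployment, per the table above
```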
- For EFCX deployment, the line <param name="url" value="http://127.0.0.1/app/xml_cdr/xml_cdr_import.php"/> in the xml_cdr.conf.xml file should be uncommented. Run the following command to open xml_cdr.conf.xml and uncomment that line.

```bash
nano /etc/freeswitch/autoload_configs/xml_cdr.conf.xml
```

Once the file is open, uncomment the line and save it. Then restart FreeSWITCH:

```bash
systemctl restart freeswitch
```
- For the EFCX deployment profile: the root domain vrs.expertflow.com should be mapped to the IP address of the VRS server. Ask IT to configure this.
- Each tenant's subdomain should be routed to vrs.expertflow.com on port 444. For example, if we have two tenants, mtt01 and mtt02, and the root domain is vrs.expertflow.com, then:
  - mtt01.vrs and mtt02.vrs should both point to vrs.expertflow.com on port 444.
  This will make the VRS accessible at https://mtt01.vrs.expertflow.com:444/#/login and https://mtt02.vrs.expertflow.com:444/#/login.
- In the case of a single tenant, simply map an FQDN to the Media Server IP on port 444 to access VRS.
- Similar to the EFCX setup, an FQDN (Fully Qualified Domain Name) must be obtained from the IT team to access the VRS application with the Cisco deployment profile.
  The root domain, vrs.expertflow.com, should remain static and be routed to the VRS Server IP. Tenant-specific subdomains are created by prefixing the tenant name to the root domain. Each tenant subdomain must route to the CX Server IP (or the corresponding VRS Server IP if applicable), for example: https://mtt01.vrs.expertflow.com
  In this configuration:
  - vrs.expertflow.com and mtt01.vrs → point to the VRS Server IP
  Note that VRS with the Cisco deployment profile is only supported for a single CX tenant.
- To replace the self-signed certificates for VRS, obtain the .crt and .key files signed by a public authority or for your domain, name them server.crt and server.key, and replace the files in /recording-solution/config/certificates with these two new files. The names must match exactly (see the sketch below).
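Illustratively, with CA-signed files named mydomain.crt and mydomain.key (hypothetical filenames):

```bash
# Replace the bundled self-signed pair; the target names must stay server.crt and server.key
cp mydomain.crt recording-solution/config/certificates/server.crt
cp mydomain.key recording-solution/config/certificates/server.key
```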
- Navigate to the recording-solution directory and assign execute permissions to the relevant install script:

```bash
chmod 755 install-cisco.sh    # in case of CISCO
chmod 755 install-efcx.sh     # in case of EFCX
chmod 755 install-replay.sh   # in case of HA CISCO
```

- Run ./install-efcx.sh for EFCX, or run ./install-cisco.sh for Cisco UCCX and UCCE.
- Run the following command to ensure all the components are running.

```bash
docker ps
```

- Once VRS is deployed with the EFCX profile, run this command to grant permissions on the recordings directory:

```bash
chmod -R 777 /var/lib/freeswitch/recordings/
```

- In the case of Cisco, see Step #13 to access the application, whereas for EFCX check Step #12 for accessibility.
- For HA-specific deployment, proceed with the following steps.
- Copy the config.env file to the Recorder 2 VM and the Replay Server, since most of the environment variables are the same (an illustrative command is sketched below).
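For example, the file could be copied with scp (hostnames and paths are illustrative):

```bash
# Copy the tuned config.env to the second recorder and the replay server
scp recording-solution/docker/config.env root@recorder2.example.com:/root/recording-solution/docker/
scp recording-solution/docker/config.env root@replay.example.com:/root/recording-solution/docker/
```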
- Follow this guide to create an rsync job on all VMs: Recorder 1, Recorder 2, and the Replay Server (a sketch of such a job follows).
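The referenced guide covers the details; as a hedged illustration only, a cron-driven rsync job pushing recordings to the Replay Server might look like this (schedule, user, and paths are assumptions):

```bash
# /etc/cron.d entry (illustrative): sync recordings to the replay server every 5 minutes
*/5 * * * * root rsync -az /var/vrs/recordings/ replay.example.com:/var/vrs/recordings/
```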
- Go to the Replay Server, give the install-replay.sh script execute permission, and then run ./install-replay.sh.
- Configure two SIP trunks on Cisco Call Manager in HA mode and set priorities for both machines.
- Perform the following steps on both Recorder 1 and Recorder 2.
- Open recording-solution/docker/docker-compose-cisco.yml.
- Inside the docker-compose-cisco file, uncomment the consul container.
- In the container_name variable, set the name to consul1 for Recorder 1 and consul2 for Recorder 2.
- Add your network interface. It can be found using the ifconfig or ip address command.
  - CONSUL_BIND_INTERFACE=ens192 # e.g., ens192 or ens32
- In the command section, set the name of the Consul node in -node=<any-name>, as shown in the code snippet below. This name must be different from the one on the second recorder.
- Set -advertise=<Local-Machine-IP> and -retry-join=<IP-of-second-recorder>. Keep the other values as they are.
```yaml
command: "agent -node=consul-106 -server -ui -bind=0.0.0.0 -client=0.0.0.0 -advertise=192.168.1.106 -bootstrap-expect=2 -retry-join=192.168.1.101"
```
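On Recorder 2 the same line is mirrored: the node name changes, and the -advertise and -retry-join IPs swap (the values below reuse the illustrative addresses from the snippet above):

```yaml
command: "agent -node=consul-101 -server -ui -bind=0.0.0.0 -client=0.0.0.0 -advertise=192.168.1.101 -bootstrap-expect=2 -retry-join=192.168.1.106"
```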
- Save changes and exit.
- Run ./install-cisco.sh.
- Check the containers on both recorders using the docker ps command.
- Deploy Recorder 2 in the same way.
- Deploy the Replay Server: add the config.env file and run the ./install-replay.sh command.