Hardware Requirements: Large Deployment (500+ Agents)
This profile demands maximum performance, robust High Availability (HA), and a comprehensive Disaster Recovery (DR) strategy, often spanning multiple geographic locations or availability zones.
Deployment Prerequisites
See CX Deployment Prerequisites for the detailed platform, operating system, and network requirements that must be in place before deploying Expertflow CX.
Core Platform (Mandatory HA/DR)
The environment must be sized to prevent any single point of failure and to handle high throughput across all digital and voice channels; a minimal sizing check against these floors is sketched after the table.
| Component | Purpose | Min. Instances/Nodes | Recommended Specs (per VM/Node) | HA/DR Strategy |
|---|---|---|---|---|
| CX-Core Cluster | Unified Routing, Agent Manager, Persistence, Analyzer | 5+ Nodes (3 Control, 2+ Worker) | 12 vCPU, 24 GB RAM, 300 GB+ SSD/NVMe (≥15000 IOPS) | Full HA/DR: cluster distributed across 2+ physical sites/zones (Active/Active or Active/Standby). |
| Databases | MongoDB, PostgreSQL (for CX-Core) | 3 Replicas (Quorum) | Dedicated, high-IOPS storage and significant RAM | Robust replication strategy (e.g., ReplicaSet with quorum) with a separate DR site. |
| Data Pipeline Orchestrator | High-Volume ETL and Data Integration (Apache Airflow) | 2+ Worker Nodes | 8 vCPU, 16 GB RAM, 250 GB Disk | Mandatory component: requires redundancy and horizontal scaling for heavy, parallel ETL jobs. |
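For presales sizing conversations, the floor in the table above can be expressed as a quick validation sketch. The thresholds and field names below simply restate the table and are illustrative assumptions, not an Expertflow tool:

```python
# Minimal sizing check against the CX-Core floor in the table above.
# Threshold values restate the table; the dict keys are assumptions for this sketch.
CORE_FLOOR = {
    "control_nodes": 3,
    "worker_nodes": 2,
    "vcpu": 12,
    "ram_gb": 24,
    "disk_gb": 300,
    "iops": 15000,
}

def check_core_cluster(nodes):
    """Return a list of violations for a proposed node layout.

    Each node is a dict with: role ('control' or 'worker'), vcpu, ram_gb,
    disk_gb, iops. These keys are assumptions made for this sketch.
    """
    issues = []
    control = sum(1 for n in nodes if n["role"] == "control")
    worker = sum(1 for n in nodes if n["role"] == "worker")
    if control < CORE_FLOOR["control_nodes"]:
        issues.append(f"control nodes: {control} < {CORE_FLOOR['control_nodes']}")
    if worker < CORE_FLOOR["worker_nodes"]:
        issues.append(f"worker nodes: {worker} < {CORE_FLOOR['worker_nodes']}")
    for i, n in enumerate(nodes):
        for key in ("vcpu", "ram_gb", "disk_gb", "iops"):
            if n[key] < CORE_FLOOR[key]:
                issues.append(f"node {i} ({n['role']}): {key}={n[key]} below floor {CORE_FLOOR[key]}")
    return issues
```

Passing a proposed node list through check_core_cluster returns an empty list when the layout meets the published floor.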
Voice and Video Platform (High Volume)
Voice components must be geographically dispersed and capable of horizontal scaling; a capacity sketch based on the per-server call rule follows the table.
| Component | Purpose | Min. Instances/Services | Recommended Specs (per Instance) | Scaling Notes |
|---|---|---|---|---|
| Media Server | Voice Processing, Recording, IVR | 4+ System Services | 12 vCPU, 24 GB RAM, 300 GB SSD/NVMe (≥15000 IOPS) | High scalability: add one Media Server for every 150-200 concurrent calls. Requires high-speed, low-latency network connectivity. |
| Jambonz (Voicebot) | Voice stream forking to ASR/TTS/NLU/LLM | 4+ Sets of 3 VMs (Distributed HA) | Scale up vCPU/RAM as required by concurrent bot usage (e.g., 8 vCPU, 16 GB RAM). | Must be deployed redundantly across HA sites and scaled horizontally for bot throughput. |
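A minimal capacity sketch, assuming only the 150-200 calls-per-server rule from the table; the example call volume is made up:

```python
import math

def media_servers_needed(peak_concurrent_calls, calls_per_server=150):
    """Media Server instances for a given busy-hour call concurrency.

    Uses the conservative end (150) of the 150-200 calls/server range above
    and never drops below the table's minimum of 4 services.
    """
    return max(4, math.ceil(peak_concurrent_calls / calls_per_server))

# Example (hypothetical volume): 900 concurrent calls -> 6 Media Servers.
print(media_servers_needed(900))
```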
Add-on Components (Optional)
| Component | Purpose | Min. Instances | Recommended Specs (per VM/Node) | Scaling Factor |
|---|---|---|---|---|
| WFM (Workforce Management) | High-Volume Scheduling/Forecasting | 2+ VMs/Nodes | 4-8 vCPU, 16-32 GB RAM, 150 GB Disk | HA mandatory. Scale based on number of agents and required forecasting complexity (300+ agents). |
| Surveys and Campaigns | Outbound Campaigning and Surveys Backend | 2+ VMs/Nodes | 4 vCPU, 16 GB RAM, 50 GB SSD/NVMe disk (≥10000 IOPS) | HA mandatory. Scale based on volume/velocity. |
Third-Party AI/LLM Components
Dedicated infrastructure for AI engines is critical in large deployments because of latency and throughput requirements; a basic GPU visibility check is sketched after the table.
| Component | Purpose | Min. Instances | Recommended Specs (per VM/Node) | Key Requirement |
|---|---|---|---|---|
| Third-Party AI/LLM Engines | ASR, TTS, NLU, LLM (Agent Assist, Autonomous Bots) | Dedicated Cluster (Consult Vendor) | High-core-count CPUs, often NVIDIA GPUs (e.g., Tesla/A100/H100) | GPU acceleration is mandatory for many modern LLM engines to meet low-latency requirements for real-time speech processing. |
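Because GPU acceleration is mandatory for many engines, a simple pre-flight check on the AI nodes is to confirm the GPUs are actually visible to the driver. The sketch below assumes nvidia-smi is installed (it ships with the NVIDIA driver); engine-specific sizing still comes from the AI/LLM vendor:

```python
import subprocess

def visible_gpus():
    """List 'name, total memory' for each NVIDIA GPU the driver can see.

    Assumes nvidia-smi is on PATH; raises CalledProcessError if the query fails.
    """
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    for gpu in visible_gpus():
        print(gpu)
```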
💡 Summary for Presales
A large enterprise deployment requires a multi-site, load-balanced cluster with 5+ nodes, high-throughput dedicated storage, and significant capacity allocated to the Media and AI components. Inter-site latency must stay below 10 ms for optimal performance.
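As a rough spot-check of the 10 ms budget (not a substitute for proper network testing), the TCP connect time to a node at the peer site approximates one round trip; the endpoint below is a placeholder:

```python
import socket
import statistics
import time

def tcp_rtt_ms(host, port=443, samples=10):
    """Median TCP connect time in milliseconds to a peer-site endpoint."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return statistics.median(times)

# Placeholder address; replace with a real node in the remote site/zone.
rtt = tcp_rtt_ms("cx-core.site-b.example.internal")
print(f"median RTT {rtt:.1f} ms -> {'within' if rtt < 10 else 'exceeds'} the 10 ms budget")
```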