Datasheet
Feature | Description |
---|---|
Hybrid AI–Human Evaluation | Evaluate every customer interaction for compliance, sentiment, language clarity, and KPIs using a configurable hybrid AI–human model. Set the percentage of evaluations handled by AI versus human evaluators to fit your QA strategy. This calibrated approach delivers consistent, bias-resistant scoring, supports scalable quality assurance, and aligns with modern contact center practices. |
Multi-Channel QM Capability | Streamline quality evaluations across all interaction channels (voice, chat, email, etc.) within one unified system. Multi-Channel Support enables consistent performance assessments across all touchpoints. |
Conversation List | Filter conversations by agent, team, date/time, wrap-up code, direction, and sentiment via the Conversation List. This enables Quality Managers to quickly identify relevant conversations and focus evaluation efforts where they are most needed. |
Review Scheduler | The Review Scheduler allows Quality Managers to automate one-time or recurring evaluations using configurable rules. Evaluations can be scheduled based on agent groups, interaction metadata, and selected evaluation forms. This streamlines review distribution, minimizes manual workload, and ensures timely, consistent quality assessments. |
Reporting | Generate five key reports to track evaluation activity and compare performance across agents, teams, and evaluators. These reports help identify performance trends, skill gaps, and evaluation consistency, supporting team calibration and continuous improvement initiatives. |
Review Screen | The Review Screen is a centralized workspace for viewing, accessing, and managing evaluations. It gives Quality Managers and Evaluators role-based access to the evaluations relevant to them and allows users to filter, track, and initiate evaluations directly from one place. |
Single Pane View | Access full conversation content, including both conversation activities and conversation data in a single, structured interface. The Conversation View component enables Quality Managers and Evaluators to review assigned interactions, complete evaluations using linked forms, and monitor agent performance with context-rich data. |
Low-Score Alerts | Define the performance threshold that triggers alerts for low-scoring evaluations, ensuring Quality Managers are promptly notified of conversations requiring immediate attention. Alerts can be delivered in real time for individual reviews or as bulk summaries at set intervals, configured via a dedicated configuration tab. |
Custom ETL Tool | A custom Python-based ETL (Extract, Transform, Load) tool lets users extract data from various sources, perform transformations, and load the results into different targets. The tool is designed for easy deployment, customization, and orchestration, making it well suited to scalable data workflows. |
Form Builder | Create fully customized evaluation forms, assign weighted sections, such as communication, compliance, and resolution, and enforce validation to ensure scores total 100%. Form Builder lets you standardize how interactions are assessed, minimize bias, and align with your organization's performance goals. |
Integrated Call & Screen Playback | Evaluators can play voice and screen recordings linked to agent–customer interactions directly within the Conversation View. The player supports basic playback functions, enabling evaluators to evaluate conversations and on-screen actions in one place. |
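The configurable AI-versus-human evaluation split described above can be sketched as a simple routing step. This is a minimal illustration only; the function and bucket names are assumptions, not the product's API.

```python
import random

def assign_evaluations(conversations, ai_percentage, seed=None):
    """Split conversations between AI and human evaluators.

    `ai_percentage` is the configured share (0-100) routed to AI scoring;
    the remainder goes to human evaluators. Illustrative sketch only.
    """
    rng = random.Random(seed)  # seeded for reproducible sampling
    shuffled = list(conversations)
    rng.shuffle(shuffled)
    cutoff = round(len(shuffled) * ai_percentage / 100)
    return {"ai": shuffled[:cutoff], "human": shuffled[cutoff:]}

# With a 70% AI share, 7 of 10 conversations go to AI scoring.
buckets = assign_evaluations(range(10), ai_percentage=70, seed=1)
```

Randomized sampling (rather than taking the first N conversations) helps keep the AI and human evaluation pools comparable, which supports the calibration between the two.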
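The Form Builder's rule that weighted sections must total 100% amounts to a simple validation check, sketched below. The function name and section names are illustrative assumptions.

```python
def validate_form_weights(sections):
    """Check that a form's weighted sections sum to exactly 100%.

    `sections` maps section name to its weight in percent. Raises
    ValueError on a mismatch, mirroring the form-level validation rule.
    """
    total = sum(sections.values())
    if total != 100:
        raise ValueError(f"Section weights must total 100%, got {total}%")
    return True

# Example form: three weighted sections that pass validation.
validate_form_weights({"communication": 40, "compliance": 35, "resolution": 25})
```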
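The low-score alerting behavior, a threshold plus a choice between real-time and bulk delivery, can be sketched as follows. The field names and modes are assumptions for illustration, not the product's configuration schema.

```python
def route_alerts(evaluations, threshold, mode="realtime"):
    """Collect low-scoring evaluations for Quality Manager alerts.

    Evaluations scoring below `threshold` trigger alerts. In "realtime"
    mode each low score is returned individually; in "bulk" mode they
    are grouped into a single summary. Illustrative sketch only.
    """
    low = [e for e in evaluations if e["score"] < threshold]
    if mode == "bulk":
        return [{"summary": low}] if low else []
    return low

# With a threshold of 70, two of these three evaluations trigger alerts.
alerts = route_alerts(
    [{"score": 55}, {"score": 90}, {"score": 60}], threshold=70
)
```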
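The extract–transform–load flow of the custom ETL tool can be illustrated with a minimal in-memory pipeline. This is a sketch of the general ETL pattern under assumed data shapes, not the tool's actual implementation.

```python
def extract(source):
    # Extract step: read rows from a source (here, an in-memory list).
    return list(source)

def transform(rows):
    # Transform step: keep rows that have a score and normalize it to 0-1.
    return [{"id": r["id"], "score": r["score"] / 100} for r in rows if "score" in r]

def load(rows, target):
    # Load step: write transformed rows into the target and report the count.
    target.extend(rows)
    return len(rows)

# Wire the three stages together: one row lacks a score and is dropped.
source = [{"id": 1, "score": 80}, {"id": 2}, {"id": 3, "score": 50}]
target = []
loaded = load(transform(extract(source)), target)
```

Keeping the three stages as separate functions is what makes a pipeline like this easy to customize and orchestrate: each stage can be swapped or scheduled independently.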