Front-line agents/reps are encouraged to get involved, while giving the business additional insight
The pain:
Agents are disengaged with Quality. They view the department as a policing body that always finds fault, rather than a support team there to help individuals reach their potential and have better conversations with customers.
As a result, operational directors, call centre managers, team leaders and supervisors struggle to motivate their teams.
Friction grows on both sides:
Agents:
- Agents disregarding feedback and scores
- Agents using phrases such as "Watch out, here come the quality police"
- Agents challenging the data: "you only picked my bad call"
QA teams:
- I keep sharing the same feedback time and time again
- I'm not sure if the Agents have even read my feedback
- Evaluation outcomes get challenged informally
Root-cause:
- Calls are manually selected. This creates the possibility of selection bias, which is then used to challenge the outcome.
- QA teams don't involve other stakeholders (Agents or Team Leaders) in the creation of the Quality Scorecards, or provide access to the guidelines. As a result, Agents don't know what the expected standard is...and simply see a "score".
- What data is shared with Agents, and how it is presented, is often an afterthought. Consequently, they don't understand or engage with it.
- Agents have no access to their own performance data. Without it, they aren't sure how they are doing, how they've improved, or which outstanding actions they've previously agreed to.
Impact:
- Agents become disengaged and demotivated by efforts to improve their performance.
- Mistakes continue to be made, leading to poor CS and low CSAT scores.
- A vicious circle of performance issues leads to people exiting the business at the business's expense, driving higher staff turnover.
- Top performers also become demotivated, as there is no sense of progress or achievement.
Evidence statements:
- By engaging Agents, Pinnacle were able to regularly achieve a 95% Quality Score. The subsequent quote included:
"I can't recommend this platform enough. Before EvaluAgent, we had a vision for how Quality Assurance should work but didn't have the tools to accomplish it. Now our team is focused on supporting our front-line teams to improve rather than debating the accuracy of data or spending days compiling multiple spreadsheets." - Pinnacle
Audio snippet from the case study recording, discussing the success of the "Agent Appeals" process:
https://otter.ai/s/jtxJLCoUT46zBFFpLV6SRA?snpt=true
Features to show:
- Evaluation process.
- Work-Qs randomly select conversations for greater fairness and less bias.
- Guidelines give agents access to the expected standards
- Completed evaluations (shared with a request to acknowledge, which is tracked) + the option to dispute the score through a structured process.
- Agent Leaderboard Insights (shows an Agent's position in the organisation)
- Request a meeting (from an Agent's perspective)
- Evaluator Performance Report (Make sure evaluators are scoring consistently + track the number of score amendments due to appeals)
You could also talk about involving Agents in calibration sessions, as well as creating a self-score mode so they can evaluate their own calls prior to a coaching session.