Ventana Research’s benchmark research into agent performance management shows that most companies recognize the vital role contact center agents play in creating good customer experiences and thus good business outcomes. The research also shows that only the most mature companies have put in place processes and metrics that encourage behaviors that deliver those outcomes. Furthermore, the research shows that companies are held back from adopting more customer-related metrics because they lack performance management tools that can help them create such metrics; most instead rely heavily on spreadsheets. I was therefore encouraged to hear, during a recent roundtable discussion sponsored by Merced Systems, from two customers that have used the Merced Performance Suite to institute a more rigorous, metrics-driven approach to improving agent performance.
The speakers indicated that a company cannot significantly improve contact center performance solely by deploying new technology. Rather, it can reach its goals only by changing processes and people – mainly by training and coaching agents. Doing this effectively requires a deep analysis of agent-by-agent performance and a system that points managers and supervisors to the areas that need improving. Without this individualized analysis, training and coaching tends toward a “one size fits all” approach that doesn’t address individual agents’ needs. Companies therefore need a system that suggests which calls evaluators should listen to so they can quickly identify areas of weakness – for example, some agents may perform poorly during the greeting or fail to give callers the required compliance information.
Another important message from the discussion is that companies must review their key performance metrics regularly and modify them to better reflect the organization’s business goals and desired outcomes. This often is not done: our benchmark research into contact center analytics found that the number-one performance metric in the contact center is average handling time, which does not connect directly to the metric executives care about most, customer satisfaction scores.
Both speakers were adamant that managing to averages doesn’t work and said that companies would do better to focus on the best and worst performers: the first to set goals that others should aspire to, and the second to assess where the most training and coaching is needed. It is also important to manage to trends; a metric by itself is of limited use, but tracking it over time and implementing training and coaching to reverse a negative trend or improve performance is likely to be effective.
This led to a discussion of key experiences with the performance management application. First and foremost, it needs to be widely adopted, which one of the speakers admitted required a little “encouragement” for some reluctant supervisors and agents. The key to adoption is that everyone trusts the outcomes and that they are consistent. That way users don’t feel Big Brother is watching for ways to take away performance-related pay, but instead see supervisors honestly looking for genuine ways to improve performance. Sharing performance information with everyone, subject to some confidentiality restrictions, can produce an environment in which everyone is trying to improve their own performance.
Finally, the speakers insisted that any program must be a continuous improvement process. Despite expressing pride in their processes and agents, they acknowledged room for improvement, which can be brought about only by more targeted coaching. One company thus implemented a closed-loop, metrics-driven quality monitoring process that uses analytics to identify areas where agents need to improve, targeted coaching to address those issues and trend analysis to ensure that the coaching is effective.
Do you use any form of analytics to drive your quality monitoring or performance management processes? If so, please tell us about them, and come collaborate with me and discuss your efforts.
Regards,
Richard Snow – VP & Research Director