If you have more than five customer support agents and thousands of customer conversations per month, you may feel anxious about your lack of insight into the quality of all those interactions. Interaction reviews — a quality assurance process for customer support — can relieve that anxiety.
Your company is growing like wildfire, and so is your support backlog. New agents are joining, and you can’t afford to dedicate weeks of one-on-one training time to get them up to speed.
You have more and more interactions with customers, but you’re not sure all of them are of high quality. In fact, they can’t possibly all be perfect, because 70% of the team started less than a year ago. You measure CSAT, but negative ratings only seem to roll in when it’s too late and, even then, they aren’t always relevant. Besides, customers can’t know whether your support lives up to your own quality standards.
Sound familiar? If it does, you have probably reached a point where a more formal review process should be established. Some call it “support quality assurance” or just “support QA.” At Qualitista we call it “ticket review” because, in essence, it is very similar to code review in engineering (FullStory recently published a great post about that). The Help Scout team famously doesn’t call them tickets, and instead refers to them as conversations. Whatever you call it, it’s the process itself that matters.
What is an interaction review?
A support interaction review is the process by which an organization systematically evaluates the communication between its support team and its customers, in order to identify ways to deliver higher-quality service.
Reviews can vary in type (peer-to-peer reviews, manager reviews, QA agent reviews, self-reviews), frequency (daily, weekly, monthly), breadth (how many interactions are reviewed of total ticket volume) and complexity (informal verbal reviews, written reviews, using spreadsheets or purpose-built software).
The good news is that a simple but consistent routine can save you from those late-night emails from the CEO with the subject line “Fwd: HORRIBLE SUPPORT EXPERIENCE.”
We have put together a practical 7-step guide to get your quality review efforts underway. Find the setup most suitable for your team’s stage and your company culture. Remember: there is no universal right answer, but working through this guide step by step will help you find yours.
1. Understand your motivation – the problem you’re solving
Ask yourself why you want to review more conversations.
Is it too many replies that miss the mark completely or a general feeling that the way you talk to customers is a bit “meh”?
Is the quality of your agents’ responses varying wildly from person to person? Are you trying to improve your CSAT score? Are you trying to get a better baseline understanding of the support your clients are receiving? Are you worried that new agents are not getting up to speed fast enough?
Maybe you want to invest in your agents and help them develop as professionals, leading to better agent retention, conversion rates, and better service levels. Happy agents = happy customers.
When you know why you’re performing reviews, you can ensure that the process you choose will address the underlying problem.
2. Define your ideal outcome of the review process
What should you be able to say once you’ve successfully implemented an interaction review process?
For example, perhaps you would like to be able to say “no case falls through the cracks” or “every agent has up-to-date product knowledge and displays empathy in every interaction” or “we go above and beyond on a regular basis.”
Painting a detailed picture of your goal is another crucial step in making it happen. Once you’ve defined your problem in step one and established your desired end-state, you’ll know where you are now and where you want to end up. All that’s left now is the “small” matter of mapping your path and getting there.
3. Decide who will do interaction reviews
Based on your organization and its specific needs, decide who will do the ticket reviews and how that work is organized.
A good rule of thumb is to lean toward one of the distributed reviewer setups (peer-to-peer and self-review) if you’re a smaller team, and one with more dedicated resources (review by team lead or dedicated person) if you’re a larger team.
Peer-to-peer ticket review
- A process where each agent reviews the work of other agents on the team. This usually involves setting a goal for number of conversations reviewed per agent per day/week/month and agreeing on basic guidelines on what to look for.
- Peer-to-peer reviews work very well in medium-sized teams with very open company culture where everything is shared with everyone and there is little hierarchy.
- Agents can learn from each other by doing reviews. It creates an open and collaborative culture and makes support feel more like a team sport, where everyone’s in the same boat working towards a common goal.
- More tickets can be reviewed with everyone pitching in.
- Know that different agents might review tickets differently and they can only comment on other people’s work based on their own unique perspective and mental rating scale. This means that you should know each reviewer’s average review score — that way you can address situations where certain people give consistently high or low scores compared to other reviewers.
- The other downside of such an approach is that reviewing tickets is nobody’s primary task and thus could be put off or done with less effort.
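The reviewer-calibration idea above — knowing each reviewer’s average score so you can spot consistently harsh or lenient reviewers — can be sketched in a few lines of Python. The data shape, names, and 10-point threshold below are illustrative assumptions, not something prescribed by this process:

```python
from statistics import mean

# Hypothetical review records: (reviewer, score out of 100).
reviews = [
    ("alice", 90), ("alice", 85), ("alice", 95),
    ("bob", 60), ("bob", 70), ("bob", 65),
]

# Collect the scores each reviewer has given.
by_reviewer = {}
for reviewer, score in reviews:
    by_reviewer.setdefault(reviewer, []).append(score)

# Average score given by each reviewer, and across all reviews.
averages = {r: mean(scores) for r, scores in by_reviewer.items()}
overall = mean(score for _, score in reviews)

# Flag reviewers whose average deviates noticeably from the group
# (more than 10 points here, an arbitrary example threshold).
outliers = {r: avg for r, avg in averages.items() if abs(avg - overall) > 10}
```

Whether a deviation means bias or simply a different mix of tickets is a judgment call; the point is to make the pattern visible so you can discuss it.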
Ticket review by team lead
- The more traditional setup for doing ticket reviews is for the team lead or support manager to do it. This approach makes it easier to ensure that agents get quality feedback and that all agents are rated on the same criteria, by the same person, in the same manner.
- Having one person or a few people (as opposed to most of the team) analyze oodles of data will inevitably lead to the person or small group spotting trends about the tickets. This would be unlikely to happen if every reviewer only sees a tiny fraction of the total volume.
- The obvious downside is that this approach will take up a lot of time for the people doing the reviews, so make sure the necessary time is allocated to them.
- The other drawback is that the feedback will only come from a single source, while your team could possibly benefit more from a wider spectrum of views and reviews.
- Of course, as the team grows in size, it may simply be too much work for an individual who has many other tasks to accomplish.
Full-time person dedicated to ticket review
- As your organization grows, it might make sense to have a full-time person (or even team) focus on quality and ticket reviews. This is very common within larger companies and institutions.
- The quality of feedback tends to increase, as specialization brings lots of expertise to the exercise. Google, for example, has organized their engineering QA like that.
- It might be difficult to build a business case for a separate person solely dedicated to ticket reviews, especially in a hyper-growth environment with a growing backlog.
Self-review
- Self-reviews can work in smaller teams where agents take a lot of responsibility for their own work. Everyone is an adult, and once you hire someone, you should trust them enough to do the right thing.
- While self-review is not ideal, keep in mind that any kind of review is better than no review at all.
4. Set up the review form and routine
There are infinite ways to organize the review process and set up the review form. Our suggestion is to start simple.
A great starting point for setting up the review form is a written feedback section — actually, a free-form comment section could work even on its own. Written comments and feedback are the parts of the form that bring the most value to the agent.
As your organization grows, you might want something more quantitative. Here you can introduce rating categories with a simple good/needs-improvement choice next to each, which you can roll up into a percentage score. You need to review at least 5–6 tickets per agent per period for the data to carry any significance; the more the merrier, of course. If you can only give each agent a few reviews, be mindful of how small samples read: one good and one bad review shouldn’t come across as “50% of your work is worthless.”
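As a minimal sketch of how good/needs-improvement ratings roll up into a percentage, and why small samples are noisy — the rating values and counts below are made-up examples:

```python
# Hypothetical ratings for one agent over a review period:
# True = "good", False = "needs improvement".
ratings = [True, True, False, True, True, True]

# Percentage of reviewed tickets rated "good" (5 of 6 here).
score_pct = 100 * sum(ratings) / len(ratings)

# With very few reviews the percentage is noisy: one good and one
# bad review reads as "50%", which overstates the signal.
small_sample = [True, False]
noisy_pct = 100 * sum(small_sample) / len(small_sample)
```

A 50% score from two reviews and a 50% score from fifty reviews are very different statements, which is why the minimum-volume guideline above matters.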
For coaching, nothing beats one-on-one meetings, which should take place at least once a month. This is your opportunity to summarize the review data and work through it together, making sure the good feedback sticks and doesn’t get lost in the ether.
5. Use the right tool to track reviewers and reviews
Have a single, easily accessible, and simple-to-use place or tool where you store and track the review data over time. This can be a simple Google Sheet or a more specialized ticket review software like Qualitista. And again, don’t make it complex, because otherwise the task becomes too daunting and will not be done.
Also, figure out how to tell whether the reviewers themselves are doing a good job. Make sure you have a mechanism for your reviewers to get feedback on their work. It can be as simple as having someone outside the review team browse through all the reviews given, looking for inconsistencies or biases.
6. Document the process and communicate it to the team
Write down the process and what is expected of everyone, somewhere the whole team can refer back to. Take the time to talk it through with everyone involved before launching, so that all parties are on the same page about why you’re doing this and how it will help the business.
Checking someone’s work or knowing your work will be reviewed by someone might feel intimidating and potentially negative at first. Creating a culture where this is the norm might take time. To get this off the ground initially, everyone must recognize the same goal.
7. Launch, test and iterate the process
Start simple to get a reasonable number of reviews done. Planning is great, but there are many things you can only learn by testing.
See what works and what doesn’t and iterate. For example, you can kick things off by starting reviews with a smaller group or with just one team out of many. You will get to know your team and how they actually work and what lies beyond CSAT, which some consider more of a vanity metric these days.
You will still need to iterate as your organization and your team changes, but you’ll not have to start from square one again.
One final — but important — reminder
While everything above is important for implementing a great conversation review process, there’s one thing that’s most important – a prerequisite to everything else:
You have to actually do the interaction reviews.
Elaborate processes are useless if you don’t do the work. It’s an easy trap to fall into, because adding complexity and tweaking processes feels like being a responsible manager, when in reality it’s creating barriers for reviewers. So when in doubt, put more emphasis on consistency.
As a Greek goddess once put it — just do it.
Measuring Support Quality: Beyond CSAT & NPS
Do you know how good your service is? Join Beth Trame from Google Hire, Shervin Talieh from PartnerHero and Aprikot.io, and Help Scout’s Mathew Patterson, for a live chat on how to tell if you’re really doing support well.
- Wednesday, August 22, 3:30 p.m. ET