6 steps for measuring self-service success


April 20, 2016

When I started as the Technical Documentation Manager at RJMetrics, my main focus was to get our Help Center in tip-top shape and to build a better self-service experience. This meant I needed to assess how content was written, organized, and structured, as well as to identify the target audience for each content type. I found that our Help Center hadn’t been a priority in the past, and as a result, it suffered from some familiar problems: poor organization, inconsistent voice, and outdated content.

After identifying these pain points, I put together a documentation strategy and content structure. Here are the six steps we found worked for building a robust self-service experience.

Step 1: Set goals for your Help Center
Our goals are probably similar to those of most organizations offering self-service, though yours may differ.

  1. Improve the overall user experience. Allow clients to quickly and easily find what they’re looking for, either by browsing or searching. Improving the content structure will help to accomplish this.
  2. Increase engagement, comprehension, trust, and value. Engaged users are empowered users. Empowered users value and understand the product, how to best leverage it, and can competently troubleshoot issues when they arise.
  3. Increase internal margins by reducing onboarding and training support. Self-service is mostly about the customer, but it needs to be about your business goals too. For RJMetrics, using our Help Center to reduce onboarding tasks meant that we’d have more time and resources for forward-facing work such as driving more analysis and improving internal processes.

Step 2: Choose your success metrics
Next, it’s time to determine the metrics that will measure the success of each of your defined goals. I was interested in figuring out how to prevent ticket submission and to empower users to help themselves.

To identify the metrics I needed to track, I started by defining the questions I wanted to answer:

  1. How many of our clients are using self-service? To track self-service attempts, we used some standard website metrics including unique visitors, pageviews, and visits.
  2. Are our users servicing themselves successfully? This question is probably the most important. Even if everything else looks healthy—high engagement rates, search clicks, and number of visitors—if a large number of users continue to submit tickets, then alarm bells should sound. To figure this out, we calculated our Self-Service Score (SSS) by dividing the total number of unique visitors that interacted with help content by the total number of unique users with tickets. We defined ‘content interaction’ as someone who did more than just visit the Help Center landing page or navigate straight to a new ticket form. This allowed us to get a better idea of how many visitors were actually trying to self-serve before submitting a ticket. We compared our results to the average—which is 4:1 according to the Zendesk Benchmark Report for Q2 2013.
  3. How engaging is our content? Tracking user engagement makes identifying areas for improvement easier. It also allows you to analyze how users move through the Help Center. For this, we drew inspiration from Snowplow’s cookbook and used standard web engagement metrics: unique visitors, session length, number of visits, number of pageviews, percentage of page read, and so on. Then, using these metrics, we analyzed how long users spent in the Help Center, what content attracted the most visitors, how much of each article was being read, and whether that content was referenced more than once.
  4. Are users finding what they’re searching for? Search health ties into increased engagement and self-service success; users who utilize search are engaged and attempt to serve themselves. To measure our own search health, we wanted to know how many people were using the search feature and how many of those searches resulted in a click-through. However, we realized that measuring usage and clicks only told us half of what we wanted to know. To get the full picture, we also had to know what users were searching for, so we analyzed top search terms as well.
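Both the Self-Service Score and search click-through boil down to simple ratios. Here’s a minimal sketch; all counts are hypothetical, and in practice they come from your web analytics and ticketing data:

```python
# Sketch of the two ratio metrics described above. The counts are
# made up for illustration; real values come from your web analytics
# (e.g. Snowplow) and ticketing (e.g. Zendesk) data.

def self_service_score(content_interaction_visitors, users_with_tickets):
    """Unique visitors who interacted with help content per unique
    ticket-submitting user. Zendesk's Q2 2013 benchmark average was 4:1."""
    return content_interaction_visitors / users_with_tickets

def search_clickthrough_rate(searches_with_click, total_searches):
    """Share of Help Center searches that led to a result click."""
    return searches_with_click / total_searches

print(self_service_score(1200, 300))                  # 4.0 -> at the 4:1 benchmark
print(round(search_clickthrough_rate(850, 1000), 2))  # 0.85
```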

Step 3: Retrieve the data
Once you’ve identified the data you need, you have to actually get your hands on that data. For our use case, we utilized two data sources:

  • Snowplow, which collects all of our web data
  • Zendesk, which contains all of our customer support data

Most of the analysis outlined in the previous step was single-source analysis, with the exception of the Self-Service Score. To calculate that, we needed to write a SQL query that joins Snowplow data to Zendesk data. Snowplow already replicates our web data to Amazon Redshift, and we used our product, RJMetrics Pipeline, to stream our Zendesk data into Redshift as well. Once all your data is in a central data warehouse, cross-domain analysis becomes pretty straightforward.
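As a rough sketch of what that cross-source join looks like, here’s the idea using an in-memory SQLite database as a stand-in for the warehouse. The table and column names are assumptions for illustration, not the real Snowplow or Zendesk schemas:

```python
import sqlite3

# Stand-in for the warehouse: one table of help-content interactions
# (from web event data) and one of support tickets. All names here are
# hypothetical, not the actual Snowplow/Zendesk schemas.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE help_events (user_id TEXT, page TEXT);
    CREATE TABLE tickets (requester_id TEXT, ticket_id INTEGER);
    INSERT INTO help_events VALUES
        ('a', '/articles/setup'), ('a', '/articles/faq'),
        ('b', '/articles/setup'), ('c', '/articles/billing'),
        ('d', '/articles/faq');
    INSERT INTO tickets VALUES ('a', 1), ('e', 2);
""")

# Self-Service Score: unique visitors who touched help content,
# divided by unique users who filed a ticket.
row = conn.execute("""
    SELECT
        (SELECT COUNT(DISTINCT user_id) FROM help_events) * 1.0
      / (SELECT COUNT(DISTINCT requester_id) FROM tickets) AS sss
""").fetchone()
print(row[0])  # 2.0 -> 4 help-content visitors vs 2 ticket requesters
```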

Step 4: Build a reporting dashboard
It’s important to build all these metrics into a dashboard you can use on an ongoing basis. At RJMetrics, we use Mode, which has a pre-built integration with Amazon Redshift. Read our complete guide for a detailed look at our analytics stack for this project, and for the exact SQL query we used.

In the end, we’re pretty happy with how our dashboard turned out. It’s organized around the four questions listed above.

RJMetrics Help Center Reporting
Step 5: Take action following your analysis
Your Help Center dashboard will give you a solid picture of how your self-service content is performing, as well as how that performance impacts your overall support load. Since we started using our dashboard, we’ve seen some fantastic results including:

  • Using top search terms to surface new content ideas
  • Identifying content with a low “average percentage read” to improve or remove, which, in our case, led to completely revamping our FAQs
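Flagging that low-read content can be as simple as averaging read depth per article and applying a cut-off. A minimal sketch, with made-up data and an assumed threshold:

```python
# Hypothetical per-article read-depth data: each value is the fraction
# of the article a visitor scrolled through, taken from web event tracking.
reads = {
    "getting-started": [0.9, 0.8, 0.95],
    "faq":             [0.2, 0.1, 0.15, 0.3],
    "billing":         [0.7, 0.6],
}

THRESHOLD = 0.4  # the cut-off is an assumption; tune it to your content

def low_read_articles(reads, threshold=THRESHOLD):
    """Flag articles whose average percentage read falls below the
    threshold -- candidates for a rewrite or removal."""
    return sorted(
        article for article, depths in reads.items()
        if sum(depths) / len(depths) < threshold
    )

print(low_read_articles(reads))  # ['faq']
```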

Step 6: Make ongoing improvements
Of course, the work is never done. We’re still figuring out how to define and refine some of the metrics we’re using, and to determine whether we need additional analysis. We question whether we ought to exclude certain ticket types from our total ticket count, and how we should define unique visitors. It’s also worth asking: Are there any blind spots? A dashboard will allow you to make data-driven decisions about content, but it might be that doing some additional analysis will help your workflows.

If you provide any kind of self-service support to your customers, keeping an eye on the performance of your content is just as important as focusing on how support requests are managed. Your Help Center is an investment in your customers and in your business.

For more detail, read the complete Six-Step Guide to Robust Self-Service Metrics. Also check out their on-demand webinar, How to use Feedback Surveys to Improve Customer Retention, for best practices on surveying your customers in Zendesk.

Today’s guest post features the data and writing chops of Erin Cochran, Technical Documentation Manager at RJMetrics, a company that builds data infrastructure and analytics software to help businesses make smarter decisions with their data.