At Zendesk, we believe that our customers’ feedback helps us improve and grow as a support organization. We ask for feedback after each support interaction by sending a customer satisfaction (CSAT) survey. Replies to this survey constitute our CSAT score, which we use as a measure of how we’re doing as a support team.
However, this isn’t where it ends: comments left in response to the survey may translate into product improvements or trigger reviews of processes and policies. We want to hear what customers have to say—be it positive or not so positive.
Today, as part of our Zendesk on Zendesk discussion series, I’ll shed some light on how we navigate the intricate workings of the CSAT survey and ratings, including a newer Zendesk feature that lets you drill into the reasons for bad satisfaction ratings.
Our discussion is broken into several sections, including:
- A general overview of the CSAT survey we send out, and how we use tags and automations to avoid sending duplicate surveys for a single ticket
- Surfacing feedback outside of our support organization
- How to add a second question to the survey to better understand the reasons behind bad satisfaction ratings
Read the full post in the forums, ask questions, and tell us how you collect and manage customer satisfaction feedback.