The human approach to building better customer satisfaction surveys
Last updated December 23, 2020

When we think of data, we think of numbers. Taken as a whole, data sets can seem incomprehensible, maybe even threatening. After all, analyzing those numbers may deliver results that are starkly good or bad. And you might worry that you’ll never be able to decipher the story behind what those results are telling you.
Relax. It’s normal to feel intimidated by something you don’t work with all the time. The good news is that when you think about what data sets represent—people answering a question you’ve asked—data sets become a lot more approachable.
Every data point is just a person performing an action of some kind. Buying a car. Returning a sweater. Trialing software but never purchasing. Data are also the people who used to shop online but suddenly stopped, or the folks who only visit your store on rainy days. When you think about data this way, as a group of humans performing an action, it becomes easier to understand the motivations behind their behavior.
While there are many ways to collect data for analyzing things like customer satisfaction, surveys are one of the most common methods. When designed well, they seem deceptively simple. In reality, however, careful consideration goes into each part of the survey experience. Great surveys (and accurate results) factor in the human element from the very beginning and throughout each phase of design and implementation. So how does one design a human-centric survey?
Design using common language
When you sit down to design a survey, one of the first things to consider is: How will the respondents interpret the question? Have you used language they’ll understand? Customers probably won’t recognize industry-specific terms (think: in the auto industry, questions about “fit & finish”) and aren’t considering your business in terms of the “brand experience.” They’re thinking of the human experience. They’re thinking about their experience. A better way to phrase a question about this experience would be, “How courteous was our staff during your last visit?” And if you really want feedback on brand experience, consider asking: “What words come to mind when you think about the Acme brand?”
Wherever possible, use language that everyone defines and interprets the same way. When it comes to survey design, it’s also important to think backward from the expected or hoped-for end result. Ask yourself a few necessary questions:
- What information do I need from this survey?
- How will this information be presented to the powers that be?
- What would I put on one PowerPoint slide to represent the results?
- How will my organization take action based on the data collected?
The answers to these questions might make the difference between asking a more general question about overall satisfaction and asking a more targeted question about how a particular product or policy has affected customer satisfaction.
The context matters during implementation
Everything that customers are doing immediately before and during a survey impacts their response. Surveys are typically delivered on the spot—in person or via a pop-up screen—or as an email follow-up. In general, ask about the overall experience via email, and keep questions about a single moment in time closely tied to the interaction of interest.
Naturally, there are reasons for this. If you ask a question about product or brand happiness directly following a bad experience, a person’s response will likely reflect the bad experience. In reality, your respondent might be a mostly happy and loyal customer. That’s why asking this question later, by email, helps avoid what’s known as the “recency effect”—where people recall the most recent interaction more readily than their overall experience.
Of course, even email can be affected by what’s happening in a person’s life, and there are many factors in a person’s experience or environment you can’t account for.
A measure that appears during or directly following a transaction should be limited to transactional questions like, “How knowledgeable was your sales associate?” That’s because the closer the timing of a general satisfaction question is to a specific interaction, the more skewed the response data will be.
Analyze with your population in mind
Taken together, the data you collect represent a community of customers. Before rolling out your survey, consider how well your respondents represent your entire customer base—and whether that even matters. For example, if a grocery store wanted to measure customer opinions about a new brand of yogurt, then they’d want to filter out customers who primarily purchase vegan products. But if you’re a software company measuring overall customer satisfaction, and 70 percent of your customers are small businesses, then the ideal sample will reflect that.
Keep things simple. Only measure who and what you need to. If you already know that men and women rate customer satisfaction the same, then you don’t need to worry about equal gender representation. But if you know that large businesses typically rate satisfaction lower, and you haven’t included any, then you’ll likely artificially inflate your results.
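To make that last point concrete, here is a minimal sketch in Python (not from the article; the segment names, population shares, and ratings are all hypothetical) showing how an unrepresentative sample can inflate an average satisfaction score, and how weighting each segment by its true share of the customer base corrects for it.

```python
# Hypothetical satisfaction ratings (1-5) collected from survey respondents.
# Large businesses rate lower but are underrepresented in this sample.
responses = [
    {"segment": "small_business", "rating": 4.5},
    {"segment": "small_business", "rating": 4.0},
    {"segment": "small_business", "rating": 4.2},
    {"segment": "large_business", "rating": 3.0},
]

# Assumed share of each segment in the actual customer base (for illustration only).
population_share = {"small_business": 0.70, "large_business": 0.30}

# Unweighted average: dominated by whichever segment happened to respond more.
unweighted = sum(r["rating"] for r in responses) / len(responses)

# Weighted average: compute each segment's mean rating, then weight by its true share.
segment_means = {}
for seg in population_share:
    ratings = [r["rating"] for r in responses if r["segment"] == seg]
    segment_means[seg] = sum(ratings) / len(ratings)

weighted = sum(population_share[seg] * mean for seg, mean in segment_means.items())

print(f"Unweighted average: {unweighted:.2f}")  # skewed toward small businesses
print(f"Weighted average:   {weighted:.2f}")    # closer to the true customer mix
```

In this toy example, the raw average overstates satisfaction because the lower-rating segment is underrepresented; weighting by the true segment mix pulls the number back toward what the full customer base would report.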
Reporting gets personal
Here’s where we go back to the beginning. When you designed your survey, you were thinking of your surveyed customers, as well as the leadership interested in the results. Reports are a lot more effective when they’re not just a slide deck of pie charts about particular customer segments or categories. At this phase, you want to bring in specific examples of the people you surveyed. The more personalized the data, the easier it will be to see how you can improve the customer experience.
Wondering how to tell a data story? Choose a particular data point (yes, a person) and tell their story. Take someone who left you free-form feedback, and talk about what that person does for a living and what their buying or behavioral patterns are.
Don’t overreact—the sky isn’t falling
This is where we like to say, “don’t Chicken Little it.” Chicken Little is the story of a chicken that gets hit on the head with an acorn and overreacts, running around yelling, “The sky is falling! The sky is falling!” When it comes to reacting to the data you’ve collected and analyzed, don’t jump to conclusions based on the numbers alone.
Why? Because data are people and people are dynamic. People change. Thus, data points might look very different over time. To get the best sense of what the data are really telling you, you need to collect data over time. Vary the settings, as well as the samples. As in any relationship, it’s more about the long game than just a point in time.

Zendesk Customer Experience Trends Report 2020
Discover how top companies provide experiences that keep customers returning and the best practices that separate the leaders from everyone else.
Download now