Now, we’re going to dig into perhaps the most critical part of survey creation:
Writing the survey’s questions.
First, we’ll talk about the types of questions you might choose to include in your customer survey. Then we’ll discuss why the way in which you word these questions is so important and go over some of the most common mistakes made when creating survey questions.
Let’s start with survey question types.
What Are The Types Of Survey Questions?
There are many different ways to ask about your customers’ experience with your brand.
But because you’ll want to set parameters for your respondents’ answers, you’ll need to frame each question in the right way.
Here, we’ll discuss the types of survey questions you might choose to ask, as well as the possible responses you might include to accompany these questions.
1. Open-Ended & Close-Ended Questions
No matter what, every question you ask within your survey will either be open- or close-ended.
Close-ended questions provide specific answer choices for respondents to choose from.
Examples include:
- “On a scale of 1-7 (7 being the highest), how would you rate our customer service?”
- “Did you make your purchase in person or online?”
- “Of the following choices, which most influenced your decision to make a purchase?”
Close-ended questions are inherently quantifiable, in that responses to a single question can be tallied across a customer base to assess where the majority of a company’s customers stand on a certain issue.
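To see how easily this tallying works in practice, here’s a minimal Python sketch (the sample responses are invented for illustration):

```python
from collections import Counter

# Hypothetical responses to the close-ended question
# "Did you make your purchase in person or online?"
responses = ["Online", "In person", "Online", "Online", "In person"]

tally = Counter(responses)  # Counter({'Online': 3, 'In person': 2})
for choice, count in tally.most_common():
    print(f"{choice}: {count / len(responses):.0%}")  # Online: 60% / In person: 40%
```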
Of course, close-ended questions don’t offer respondents the opportunity to clarify their answers. This is where open-ended questions come in.
Open-ended questions allow respondents to use their own words in their answers. Examples of open-ended questions include:
- “Explain what you liked best about our service.”
- “Why did you choose x as the most important factor in your buying decision?”
- “Was there anything the survey didn’t mention that you want to discuss? If so, explain.”
While open-ended questions can provide a more complete look into a customer’s experience, it’s best to use them sparingly.
On the respondent’s end, open-ended questions take much longer to answer and require more effort to complete. Because of this, respondents often choose to skip them.
On the surveyor’s end, responses to open-ended questions aren’t quantifiable – which means they take much more time to analyze and understand. Whereas responses to close-ended questions can be quickly categorized (and can even be done automatically via survey software), open-ended responses need to actually be read by the surveyor in order to be understood.
For these reasons, open-ended questions are best used to accompany close-ended questions rather than to stand on their own. For example, you might give respondents the option to explain their answer to a given question. Again: make it clear that responding to these open-ended questions is optional; otherwise, you risk overwhelming your respondents and leading them to abandon the survey altogether.
2. Single-Answer Versus Multiple-Answer Questions
Single-answer questions allow respondents to choose only one answer. The answer options are typically either opposites of each other or points on a scale. For example, for the question “Was your most recent purchase made online or in person?”, only one of the answers can be true. Or, if a question asks respondents to rate the company’s checkout process on a scale of 1-7, they would need to choose a single rating.
Another possibility would be a question in which respondents are asked to choose what they consider to be the most important aspect of the company’s service.
In contrast, multiple-answer questions (often called “checkbox questions”) are those which, naturally, allow respondents to choose more than one answer.
For example, you might ask your customers how they’ve interacted with your company in the past, providing answers such as “Social Media,” “Website,” “Storefront,” etc. Respondents who have engaged with your brand in more than one way would then be able to choose every answer that applies to them.
To further illustrate the difference between the two, the above question in single-answer form would be “Which channel do you use most when engaging with our company?” In this case, only a single answer would provide the information the surveyor is looking for.
3. Forced Versus Neutral Questions
The way in which answer options are provided for certain survey questions can do one of two things:
- Force respondents to “pick a side”
- Allow respondents to choose a neutral answer rather than give their opinion on a certain topic

Forced-response questions provide an even number of choices, while neutral-response questions provide an odd number.
Note that, unlike in the single-answer versus multiple-answer example above, the wording of the question itself need not change – only the answer options do.
Consider the survey question, “Agree or disagree with the following statement: ‘Our customer service department was helpful.’”
Forced-response options would be as follows:
- Strongly disagree
- Disagree
- Agree
- Strongly agree
On the other hand, neutral-response answers would be as follows:
- Strongly disagree
- Disagree
- No opinion
- Agree
- Strongly agree
Either option, of course, has its pros and cons.
Forced-response questions make respondents decide: was the customer service helpful or not?
On the one hand, this can keep respondents from choosing the neutral option as an easy way out of answering, or from hiding a genuinely negative opinion because they don’t want to “ruffle any feathers.”
On the other hand, perhaps respondents truly were indifferent to the customer service they received – and now they have to choose a side whether they believe their choice or not. Of course, they could always skip the question altogether – which gives you no information at all.
However, it’s also important to realize that a neutral answer is not a null answer. In other words, there’s a lot to glean from a neutral response.
Think about the customer who reports having “no opinion” on how your customer service department affected their experience. While it doesn’t seem as if anything’s going wrong in this area, there isn’t much to celebrate, either. In other words, a neutral response can be a sign that you have room for improvement in a certain area if you want to be able to “wow” your customers.
A Quick Note On Net Promoter Score
Net Promoter Score (NPS) measures a customer’s propensity to recommend your services to people in their network.
NPS is calculated in the following steps:
- Asking customers the question: “On a scale of 0-10, how likely are you to recommend our brand to your friends, family members, or colleagues?”
- Defining Promoters, Passives, and Detractors using the following criteria:
  - Response of 9-10: Promoter
  - Response of 7-8: Passive
  - Response of 0-6: Detractor
- Determining the percentage of responses defined as Promoters and Detractors
- Subtracting the percentage of Detractors from the percentage of Promoters
Your NPS shows your brand’s perceived value, as well as how you stack up against other companies within your industry.
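To make the arithmetic concrete, here’s a minimal Python sketch of the calculation (the function name and sample scores are illustrative, not taken from any particular survey tool):

```python
def net_promoter_score(responses: list[int]) -> float:
    """Compute NPS from 0-10 'likelihood to recommend' scores."""
    promoters = sum(1 for r in responses if r >= 9)   # responses of 9-10
    detractors = sum(1 for r in responses if r <= 6)  # responses of 0-6
    # Passives (7-8) count toward the total but not toward either group.
    return (promoters - detractors) / len(responses) * 100

# 5 promoters, 3 passives, 2 detractors out of 10 responses:
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # 30.0
```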
Mistakes To Avoid When Writing Customer Survey Questions
As we’ve alluded to throughout this and previous sections, the way you word your survey questions can heavily influence your respondents’ answers.
Your wording can also nullify their responses entirely.
We’ve already discussed the types of survey questions at length, so let’s briefly review some of the most common (and most detrimental) mistakes made when writing survey questions.
1. Double-Barreled Questions
Double-barreled questions are those that mention two topics at the same time, creating a dilemma for respondents.
For example, consider the question (or rather, statement) “The checkout process was quick and easy.” For the respondent to “strongly agree” with this statement, the checkout process would have had to be both quick and easy for them. If it was easy but took more time than expected, the respondent can’t truthfully agree with the statement.
On the surveyor’s end, there’s no way to understand the meaning behind a negative response to such a question. Was the checkout process quick but not easy? Was it easy but not quick? Was it slow and difficult? It’s impossible to tell without reaching back out to the respondent – rendering the initial survey’s results moot.
Double-barreled questions are easily fixed by breaking them into two separate questions. This ensures your respondents won’t be confused, and that each answer addresses one topic only.
2. Leading Questions
Leading questions, intentionally or not, make respondents feel as if there’s a certain “right” answer to the question at hand.
In the “real world,” leading questions are used all the time, like “You don’t want to miss out on a great deal, do you?” or “That was a great meal, don’t you think?”
Those are obvious examples of leading questions, and they’re easy enough to avoid when creating survey questions.
But leading questions can be much more subtle than that. For example, the question “How high would you rate our customer service?” plants a seed in the respondent’s mind that the service was high in quality – and that the only question is how great it was.
You can fix a leading question by removing any presumption of quality from it. Using the previous example, you’d simply ask: “How would you rate our customer service?” You’d then provide a Likert scale, defining 1 as “Very Poor” and 7 as “Very Good.”
3. Loaded Questions
A loaded question makes an unverified assumption about the customer’s experience, then asks a question based on that assumption.
For example, the question “How would you define your interaction with our customer service team?” assumes the respondent has, in fact, interacted with the company’s customer service team.
In such instances, respondents might skip the question, or they might simply choose the “neutral” or “not applicable” option. But, going back to what we talked about earlier, this doesn’t give the surveyor much information. Is the respondent saying they have no opinion about the quality of service the team provided? Or did they not interact with the team at all? Again, there’s no way to tell without following up.
To avoid loaded questions, you’ll first need to ask a qualifying question (e.g., “Did you engage with our customer service team?”). Then, depending on the respondent’s answer, you can use branching or skip logic to either ask a follow-up question regarding the quality of service provided or move on to the next question.
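Most survey platforms handle this with built-in branching rules. As a rough illustration of the flow, here’s a self-contained, console-based sketch in Python (the ask helper is a hypothetical stand-in, not any real survey tool’s API):

```python
def ask(prompt: str, choices: list[str]) -> str:
    """Prompt on the console until the respondent picks a valid choice."""
    while True:
        answer = input(f"{prompt} {choices}: ").strip()
        if answer in choices:
            return answer

# Qualifying question comes first, so the follow-up is never shown blindly.
if ask("Did you engage with our customer service team?", ["Yes", "No"]) == "Yes":
    # Branch: only qualified respondents see the rating question.
    rating = ask("How would you rate our customer service? (1 = Very Poor, 7 = Very Good)",
                 [str(n) for n in range(1, 8)])
# Respondents who answered "No" skip straight to the next question.
```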