Chances are, you’ve recently taken a survey. People are being surveyed now more than ever before. Were you frustrated by any of the survey items? Did you feel limited or overwhelmed by response options? Rae Jean Proeschold-Bell, associate research professor and founding director of the Evidence Lab at the Duke Global Health Institute, recently gave a talk about best practices for writing survey items based on research by Stanford University professor Jon Krosnick and other survey experts. Here are five key takeaways from her talk.
1. Write Questions with the Answering Process in Mind
Answering a survey item, Proeschold-Bell says, is actually a five-step process. Respondents read the question, figure out what it’s trying to assess, search their memory for relevant information, integrate their thoughts into a single judgment and translate that judgment to the best response option on the survey. Keeping these steps in mind while writing questions not only makes the respondent’s job easier, but also increases the likelihood of an accurate response.
Survey respondents typically fall into one of two categories: optimizers and satisficers. Optimizers are motivated and able to complete the survey, and they put in the work for all five steps of the answering process. Satisficers complete the survey less carefully, typically providing low-quality data by responding too neutrally or by not reading the question closely. Satisficing tends to increase with task difficulty and decrease with respondent ability and motivation, so these are key factors to consider in survey item design.
“You have to assume that everyone is going to be a satisficer,” said Proeschold-Bell, “so you need to make all the steps as easy as possible.”
To help ease the respondents’ burden, Proeschold-Bell recommends designing surveys with simple, concrete words, as well as consistent words and syntax. Response options should be exhaustive and mutually exclusive. Double negatives, leading questions and “double-barreled” items that touch upon more than one issue should be avoided.
2. Make it Easy for the Respondent to Agree or Disagree
When I look at the world, I don’t see much to be grateful for.
Consider the problems respondents may face when answering this item on an agree–disagree rating scale. The three "disagree" response options, combined with the word "don't" in the question, create a double negative that can confuse respondents. It's also unclear whether the item is asking how grateful the respondent feels or how often the respondent feels grateful. Now, let's take a look at an improved version of the same item:
When you look at the world, how grateful are you?
Not at all grateful
The rewritten item eliminates both the double negative and the ambiguity between how grateful the respondent feels and how often they feel grateful.
3. Minimize Rating Scale Confusion
When designing a rating scale, it’s critical that each point is unique and means the same to both the researcher and the respondent. Proeschold-Bell encourages survey writers to carefully consider whether people make fine-grained distinctions about the construct before creating a rating scale with many options. Rating scales with five to seven response options are most likely to be reliable and valid and yield quality data.
And what about that nebulous “neutral” option? If you expect to have satisficers among your respondents, Proeschold-Bell says, it’s best not to include a neutral option. On the other hand, a neutral option can be good for optimizers. (And remember Proeschold-Bell’s earlier advice: You have to assume that everyone is going to be a satisficer.)
4. Carefully Order Every Aspect of Your Survey
The order of items in a survey, and of the response options within each item, can have surprising effects on the results. In a self-administered survey (print or online), respondents are most likely to choose the options listed first, whereas in a survey administered verbally, they tend to choose the options listed last.
Items toward the end of self-administered surveys tend to be subject to more “satisficing.” “Because of this, I always put the demographic information at the end,” said Proeschold-Bell.
Proeschold-Bell also points out that optimizers become more accurate as they progress through a survey. Because people learn what the surveyor is trying to discern as they answer questions, responses closest to the end tend to be the most accurate. Grouping questions by topic can aid this learning and improve response accuracy.
5. Test Your Survey before Distributing It
Proeschold-Bell emphasizes the importance of pre-testing a survey. She recommends reviewing the survey with someone similar to your intended respondents to determine whether any questions may be confusing or unclear. Another helpful resource is the Question Understanding Aid tool, an online application that gives feedback on reading level and precision of words used in survey questions.
The most useful pre-testing tool, according to Proeschold-Bell, is cognitive interviewing, in which the researcher gives a sample respondent an open-ended prompt and asks them to think out loud. This is a great way to see where respondents struggle in each of the five response steps. For example, they may have a hard time mapping their answer onto the existing response options. The researcher can also ask follow-up questions about specific words used, determining what words such as “stress” or “happy” might mean to a respondent. Sometimes words that mean one thing to researchers mean something else to respondents. The researcher can also see how long items take and conduct a respondent debrief to collect any additional feedback.
The bottom line? Creating effective survey questions is tough, but so is answering them. For best results, make every effort to ease the burden on your respondents.
Want more? Watch Proeschold-Bell's talk.