Editor’s note: Pete Cape is global knowledge director in the London office of Shelton, Conn., research firm SSI.

In the absence of an interviewer, an online survey participant takes his or her instructions from one of three places when deciding whether a question demands only one answer or allows many:

  1. The question text itself: The wording alone often implies whether the question is a single-code (requiring just one answer) or a multicode (allowing one or more answers). Clear, singular words (such as “which one…”) or singular forms (“reason” instead of “reasons”) should tell the participant what to do.
  2. Visual cues: radio buttons for single choices and check boxes for multiple selections. The convention has roots in printed questionnaires, and online survey design programs generally follow it (a minimal sketch appears after this list).
  3. The explicit instruction, which is usually placed after the question text.
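To make the second cue concrete, here is a minimal TypeScript sketch (the `renderQuestion` helper and its answer list are illustrative assumptions, not part of SSI's study) showing how the same answer list renders as radio buttons or check boxes. Radio buttons sharing a single field name let the browser itself enforce a single selection; check boxes place no such limit.

```typescript
// Hypothetical helper (not from the article): renders a survey question as
// HTML using the radio-button/check-box convention in point 2 above.
type QuestionMode = "single" | "multi";

function renderQuestion(
  name: string,        // form field name, e.g. "S5"
  text: string,        // question text shown to the participant
  options: string[],   // answer list
  mode: QuestionMode,  // "single" -> radio buttons, "multi" -> check boxes
): string {
  // Radio buttons that share one `name` can hold only one selection at a
  // time; check boxes allow any number of selections.
  const inputType = mode === "single" ? "radio" : "checkbox";
  const items = options
    .map((opt, i) =>
      `<label><input type="${inputType}" name="${name}" value="${i}"> ${opt}</label>`)
    .join("\n");
  return `<fieldset><legend>${text}</legend>\n${items}\n</fieldset>`;
}

// Example: the same answer list rendered both ways.
const motivations = ["To keep fit", "To lose weight", "For fun", "To socialize"];
console.log(renderQuestion("S5", "What is your main motivation…?", motivations, "single"));
console.log(renderQuestion("S5", "What are your motivations…?", motivations, "multi"));
```

This is the cue participants actually experience: the form either permits multiple selections or it does not.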

SSI has conducted research into how different approaches to survey instructions affect the quality and volume of responses given.

The first experiment phrased the question in the singular (“main motivation”), included no explicit instruction and used check boxes for the answer list:

S5 What is your main motivation in participating in sports or physical exercise?

In this group, four in 10 respondents gave multiple responses and 57 percent gave just a single answer; on average, 1.7 answers were given per participant.

The second experiment inserted the direct instruction to give only one answer:

S5 What is your main motivation in participating in sports or physical exercise? Please select only one reason.

The instruction was adhered to by only an additional 6 percent of the sample: 63 percent of respondents gave a single answer.

The two resulting data sets (sample sizes of 262 and 245) are, for all intents and purposes, identical.

But what precisely does this data represent? It does not represent participants’ single main motivation, since so many people selected multiple answers. Nor does it represent all reasons: the only thing stopping everyone from giving multiple answers is that some participants inferred a single answer from the question wording in the first experiment, while others obeyed the instruction to give just one in the second. The data therefore sits somewhere between the two, representing neither.

The only way to be 100 percent sure of getting a single answer is to program the online survey to accept only a single answer. Since so many people in this research example wanted to give more than one answer, the better design would have been to ask for all motivations and then, if the participant offered more than one, ask them to pick their main one. Designing the question this way costs a little more in programming, as well as a tiny amount of extra interviewing time, but the result is a much cleaner data set, one with meaning (a sketch of this two-step flow follows).
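Here is a minimal TypeScript sketch of that two-step flow. The function and the `askMulti`/`askSingle` helpers are hypothetical stand-ins for whatever prompts the survey platform provides; `askSingle` is assumed to accept exactly one answer by design (e.g., radio buttons).

```typescript
// Sketch of the two-step design described above (names are illustrative,
// not SSI's actual survey code).
interface SurveyResult {
  allMotivations: string[]; // every reason the participant selected
  mainMotivation: string;   // exactly one reason, always populated
}

function runMotivationQuestion(
  askMulti: (text: string, options: string[]) => string[], // multi-select prompt
  askSingle: (text: string, options: string[]) => string,  // single-select prompt
  options: string[],
): SurveyResult {
  // Step 1: collect every motivation with a multi-select question.
  const all = askMulti(
    "What are your motivations in participating in sports or physical exercise?",
    options,
  );

  // Step 2: only participants with multiple answers see the follow-up; for
  // the rest, the single answer given is, by definition, the main one.
  const main =
    all.length > 1
      ? askSingle("And which of these is your main motivation?", all)
      : all[0];

  return { allMotivations: all, mainMotivation: main };
}
```

Because the follow-up is routed only to participants who gave more than one answer, the extra interviewing time stays small, and mainMotivation is guaranteed to hold exactly one value: the clean, meaningful data set described above.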

More and more, we are realizing that participants take little notice of explicit instructions. This appears to be a human trait, not a failing of online panelists. Researchers must find effective ways of ensuring their questions are answered correctly and as intended.