Editor’s note: Pete Cape is knowledge director, and Jackie Lorch is vice president global knowledge management, at Survey Sampling International, Shelton, Conn.

A client recently contacted our firm, Survey Sampling International (SSI), to express concern about speeding respondents in her sample. The client's own pre-tests had indicated 25 minutes as a reasonable completion time for the questionnaire, yet it appeared that 8 percent of respondents had taken less than 10 minutes to complete the study - an impossibly fast time, the client believed.

The client questioned why these “extreme speeders” had not been removed from the study, leaving her with only those people who would complete the study at a slower, more reasonable pace.

This was a fair question. Sample providers are in the business of supplying careful, attentive respondents, and as standard practice, SSI, along with most sample providers, supports client projects by replacing respondent cases that display quality issues, including speeding.

But we were curious and decided to look into the data in more detail. Was there something unusual about this study design or this questionnaire that could have caused or contributed to a tendency to speed? And, more importantly, would removing results from the fast survey-taking “hares,” and retaining only the tortoises who had completed in a slower, more deliberate manner, bias the survey data in some way?

An examination of the questionnaire and the data provided some answers.

As expected, the fastest survey-takers were more likely to straightline their answers (or at least show very little variance in their responses). As Table 1 shows, one-third of those taking less than 10 minutes to complete the survey straightlined the first question.

As Table 2 shows, the outcome was similar on question two.

Jumping forward to Q14 (Table 3), the "problem" gets worse (although Q14 has only five items, so straightlining there is a more plausible genuine response).
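For readers who want to replicate this kind of check, here is a minimal sketch - not SSI's actual procedure - of flagging straightliners, assuming grid responses sit in a pandas DataFrame with one numeric column per grid item; all column names and values here are illustrative:

```python
import pandas as pd

# Hypothetical respondent-level data: one column per grid item on a 1-5
# scale, plus completion time in minutes. All values are made up.
df = pd.DataFrame({
    "q1_item1": [3, 5, 2, 4],
    "q1_item2": [3, 1, 2, 5],
    "q1_item3": [3, 4, 2, 4],
    "minutes":  [8, 22, 9, 27],
})

grid_cols = ["q1_item1", "q1_item2", "q1_item3"]

# A respondent straightlines a grid when every item gets the same answer,
# i.e., when the variance across the grid's columns is zero.
df["straightlined_q1"] = df[grid_cols].var(axis=1) == 0

# Cross-tabulate straightlining against speeding, as in Table 1.
df["speeder"] = df["minutes"] < 10
print(pd.crosstab(df["speeder"], df["straightlined_q1"], normalize="index"))
```

Using a near-zero variance threshold instead of exactly zero would also catch the "very little variance" cases mentioned above.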

However, the key finding was a strong correlation between speeding and lack of interest in the survey topic. Respondents were asked how much they cared about the topic, and Table 4 clearly shows that those for whom the issue was of little or no concern were more likely to complete the questionnaire in under 15 minutes.

Similarly, faster completers were more likely to have said that the issues were not a consideration for them or were a low priority. Fifty-seven percent of those who had no interest in the topic completed the questionnaire in less than 15 minutes, compared to 22 percent of those who said they cared a lot about the subject.
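The Table 4 relationship can be checked the same way; here is a sketch, again with invented labels and values, of computing the share of each interest group finishing in under 15 minutes:

```python
import pandas as pd

# Hypothetical per-respondent records: self-reported interest in the topic
# and completion time in minutes. Labels and values are illustrative only.
df = pd.DataFrame({
    "interest": ["none", "a lot", "low", "a lot", "none", "some"],
    "minutes":  [9, 27, 13, 22, 14, 18],
})

# Share of each interest group completing in under 15 minutes (cf. Table 4);
# in the study, 57 percent of the no-interest group fell below the cutoff,
# versus 22 percent of those who cared a lot.
fast = df["minutes"] < 15
print(fast.groupby(df["interest"]).mean().round(2))
```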

Straightlining is not necessarily a problem when it is due to lack of salience - assuming the respondent straightlines the "right" answer. For some of the questions in this survey, someone with no interest in the topic could legitimately straightline through many of the responses.

So what to do? Industry standard practice says we need to replace the speedsters. But if we do that, we are very likely to replace them with people who care about the topic, thus biasing the answer to the “How much do you care about this issue?” question.

Further, if respondents who did not care about the topic were rushing through the survey, they might also be more likely to drop out. An examination of the data (Table 5) proved this hypothesis correct as well.

The dropout rate among respondents for whom the survey topic was not even a consideration was more than 50 percent higher than the average dropout rate for the survey.

So, does the industry standard practice of replacing dropouts and speeders with “fresh, attentive” sample cause a worrisome bias in the survey data itself? Should the practice be changed? And how might speeding behavior be discouraged?

The study had some characteristics which could have made it a fertile environment for speeding:

•   It was very long, at 25-30 minutes, for a topic which ranks low on the “passion” scale.

•   It was quite challenging to complete, with 67 questions asked about one specific item, followed by an almost identical set of questions to be answered for another item, followed by the same questions relating to two further items - over 250 questions in total.

•   The survey also assumed that respondents had definitely purchased each specific item, possibly forcing a respondent to answer detailed questions about a product they had never bought.

•   The study contained multiple grid-style questions, and their impact can clearly be seen by analyzing dropouts. Of the 134 respondents who completed the last screener but not the whole questionnaire, only one left the survey at the last of the grid/product questions. Large dropout rates appear at the first sight of the grid (Table 6), at the first repeat of that grid and then at the change to a new product and the subsequent repeat of that grid; tallying each dropout's last answered question, as sketched below, exposes these cliff points.
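One way to surface those cliff points, sketched here under the assumption that the fieldwork platform logs each dropout's last answered question (the question identifiers are invented):

```python
import pandas as pd

# Hypothetical dropout log: the last question each abandoning respondent
# answered before leaving. Question IDs are invented for illustration.
last_answered = pd.Series([
    "Q8_grid1_first", "Q8_grid1_first", "Q75_grid1_repeat",
    "Q8_grid1_first", "Q142_product2_grid",
])

# Counting dropouts by last answered question shows where respondents bail:
# spikes appear at the first sight of a grid and at each repeat (cf. Table 6).
print(last_answered.value_counts())
```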

How could the survey be improved?

•   Either remove some of the questions or rotate a subset of the questions for each respondent, providing a shorter questionnaire experience for everyone (i.e., 4,000 people answering 160 questions each yields more data - 640,000 answers - than 2,500 people answering 250, which yields 625,000; see the rotation sketch after this list).

•   Ensure that the questions are ones that the respondent is able to answer (i.e., the questions should refer to a product that the respondent has bought).

•   Identify the interest level at the start of the survey, and branch those respondents who have less interest in the topic to a shorter subset of questions, thus retaining important answers from the less-involved subset of the population within the data set.

•   Where possible, introduce items into a grid question that make straightlining a less realistic option (for example, reverse-worded items, where a consistent opinion requires answers at opposite ends of the scale). While this does not prevent the behavior, it does make identification easier.
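As a sketch of the rotation idea in the first bullet above - an assumed implementation, not a description of any particular survey platform - each respondent can be served a reproducible random subset of the question pool, so every question still accumulates answers across the full sample:

```python
import random

QUESTION_POOL = [f"Q{i}" for i in range(1, 251)]  # the full 250-question pool
SUBSET_SIZE = 160                                  # questions each respondent sees

def questions_for(respondent_id: int) -> list[str]:
    """Return a reproducible random 160-question subset for one respondent."""
    rng = random.Random(respondent_id)  # seed by ID so a reload shows the same set
    return rng.sample(QUESTION_POOL, SUBSET_SIZE)

# Each question is answered by roughly 160/250 = 64 percent of respondents,
# so 4,000 completes yield about 2,560 answers per question.
print(questions_for(42)[:5])
```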

Deserve a second look

The lesson learned from this case study is that speeding-respondent situations deserve a second look. The questionnaire design and response data should be carefully examined whenever poor respondent behavior is suspected, because there may be legitimate reasons why the respondent has sped through the survey. Moreover, if the population under study is made up of “animals” of all types, merely replacing all the hares with tortoises may put the researcher on a fast track to poor-quality research results.