Editor’s note: Miguel Conner is marketing director at Chicago-based research and data collection firm qSample. This is an edited version of a post that originally appeared here under the title, “New studies claim survey ‘trap questions’ are questionable for market research.”

Two new studies indicate that the conventional practice of using trap questions in online surveys may not be as effective as originally supposed. In fact, both studies suggest that trap questions can have unintended effects – a notion that tends to unnerve the methodical market research industry.

The findings came from a pair of University of Michigan studies on instructional manipulation checks, or IMCs. Both concluded that answering trap questions may alter the way people respond to subsequent questions in a survey.

For those not entirely familiar with IMCs, they are essentially the same as trap questions – sometimes called attention checks. Not all survey respondents pay sustained attention to questions – or even follow instructions – instead blazing through a questionnaire. These respondents tend to dilute survey data.

Consequently, it’s not uncommon for researchers to place safeguards – in the form of unrelated questions or instructions – at certain intervals of a survey. These safeguards aim to recalibrate respondents’ focus or cull those with no interest in providing usable data.

Here is an example from a social scientist:

So, in order to demonstrate that you have read the instructions, please ignore the sports items below. Instead, simply continue reading after the options. Thank you very much.

Which of these activities do you engage in regularly? (Write down all that apply)

1)    Basketball

2)    Soccer

3)    Running

4)    Hockey

5)    Football

6)    Swimming

7)    Tennis

Did you answer the question? Yes? Then you failed the test.

Another example – perhaps more approachable, as it’s found in popular culture – comes from Monty Python and The Holy Grail, in the scene where the magical bridgekeeper tests King Arthur and his knights with a series of questions. Answering correctly proves the knights’ mettle, allowing them to pass over the Bridge of Death and draw closer to finishing their hallowed quest:

Bridgekeeper: Stop. What… is your name?

Galahad: Sir Galahad of Camelot.

Bridgekeeper: What… is your quest?

Galahad: I seek the Grail.

Bridgekeeper: What… is your favorite color?

Galahad: Blue. No, yel…

Galahad is then thrown over the bridge into the Gorge of Eternal Peril. He was a respondent attempting to blaze through the bridgekeeper’s questionnaire.

The two University of Michigan studies indicate that respondents’ thinking may be modified after answering an IMC or trap question.

In the first study, subjects received a trap question alongside a math test. Half of the participants completed the trap question before the math test, whereas the other half completed the math test first. The researchers found that completing the trap question first increased subjects’ analytical thinking scores on the math test.

In the second study, subjects received a trap question alongside a reasoning task designed to assess biased thinking. As in the first study, half of the participants completed the trap question before the reasoning task, while the other half completed the reasoning task first. The researchers found that completing the trap question first decreased biased thinking and produced more correct answers. In short, completing a trap question made subjects reason more systematically about later questions.

All of this, as the lead researchers pointed out, suggests that many past studies may have been affected by IMCs. Deeper thinking may not always be the best state for a respondent during a survey; rather, the optimal state is one in which respondents reason as they normally would in daily life. As more research is conducted on the efficacy of IMCs in survey research, it may make sense to focus more on other traditional safeguards, such as mitigating response bias or response fatigue.

It should be noted that neither study points to any alarming flaws in past research. In other words, these findings should not unseat market research from its continued quest – over bridges of river sample – to the Holy Grail of the best possible data. Market research just has to remain vigilant that it and its respondents are in the right frame of mind.