Editor’s note: Emily Geisen is senior XM scientist, Qualtrics, a Provo, Utah, and Seattle-based software company.
Imagine that the local dairy farmers in your area want to know how much milk people drink in a week. So they send out a questionnaire that asks, “In a typical day, how many glasses of milk do you drink?”
John and Selma both drink 12 ounces of milk every day. Selma, however, thinks of a “glass” as exactly 8 ounces, so she says she drinks 1.5 glasses of milk each day. John, on the other hand, thinks of a “glass” as the object from which he drinks. He is not thinking about how much milk the glass holds. He says he drinks one glass per day. Although John and Selma drink the exact same amount of milk a day, they provide very different answers because they interpreted the word “glass” differently.
Tamara also says she consumes one glass of milk a day. When she reads the word “milk,” however, she thinks of soymilk, because that’s what she drinks. But the dairy farmers only want to know about cow’s milk consumption, so the answer Tamara gives isn’t quite what they were asking about, either.
And the question is still loaded with other ambiguities. Some respondents may report the milk they use in cereal or cooking, while others will not because they do not consider that “drinking.” Most people will be able to answer the question – and the answers will look reasonable – but the data won’t be very accurate.
As researchers, we ask questions to measure people’s attitudes, opinions and behaviors. But if respondents don’t interpret those questions in the same way, the data that’s meant to drive action and insight won’t be reliable enough to do so – or worse, it will provide the wrong insight and inspire the wrong action.
So how do we avoid wasting time and money on faulty data? One way is to invest in cognitive testing.
Cognitive testing – or cognitive interviewing, as it’...