Editor’s note: Stephen Turner is partner at Fieldwork, a Chicago-based market research firm. 

As a lifelong marketing researcher, I’ve been asking people what they like, what they don’t and why for more than 40 years. I’ve worked on both qualitative and quantitative projects, but I ended up focusing almost entirely on the former.

It won’t surprise anyone who knows me that, in trying to understand why election polling failed to give us a more trustworthy ride into this year’s nail-biter than it did in 2016, I have come to believe that the pundits (once again) didn’t do enough good qualitative research as part of their effort. Seems I live in my own echo chamber.

This is not to say that all pollsters failed to improve their approach. David Leonhardt of the New York Times tells us (November 13, 2020 – citing a thorough discussion from FiveThirtyEight.com) that “polls have still been more accurate over the last four years than they were for most of the 20th Century.” Indeed, my understanding is that error margins on this year’s presidential race were generally no worse than they were in 2016 despite an even more chaotic sociopolitical environment this time around.

This leads me to believe the polling industry put a lot of thought and effort into sampling (and weighting) its polls to counter the response biases that, it had learned, led it to misread President Trump’s shocking 2016 victory.

I believe the problem was less how many or what sorts of people were asked about their intentions than what they were asked and how they were asked.

A qualitative environment 

Qualitative research’s most powerful asset is its ability to address the validity side of an inquiry, even when the reliability side is ignored entirely (as it often is in qualitative projects).

The truth of this contention can best be understood through examples of how respondents in a qualitative environment are free to point out (directly or indirectly) that the questions we raise may not be at all clear in their intent, let alone relevant.

Let’s look at an example:

Moderator’s question: How would you rate William Barr’s performance at this point in time?
Respondent: Who’s William Barr?

In a quantitative study (e.g., polling), respondents such as the one above are apt to answer the question despite having no meaningful basis for a response. They may do this for any number of reasons. They may give an answer to avoid displaying their ignorance (there’s little reason to reveal one’s lack of knowledge in the context of a poll). Or they may make up an answer reflecting their perception of what the asker wants to hear (the Bradley effect) for the sake of propriety – especially if they’re being paid to participate.

But the motivation doesn’t matter: the distribution of responses will still form a bell-shaped curve against which reliability can be measured – even if the majority of those responses have no meaning in terms of the questioner’s objective.

Let’s review another example, noting that the questions themselves can be based on spurious assumptions:

Moderator’s question: How would you rate this brand of maple syrup in terms of sweetness – too sweet, not sweet enough or about right? 

Respondent: I suppose it’s “about right” but the truth is that all maple syrup is sweet – that’s what it’s supposed to be – never had maple syrup that wasn’t sugary sweet. I don’t think anyone differentiates between brands on that basis. What I care about is how distinctive the maple flavor is.

It would be easy to cite a host of questions that would elicit statistically reliable answers but that – due to ignorance, posturing, misinterpretation or many other problems of validity – are destined to mislead.

Testing a question’s validity is a critical part of any serious research design. 

Unfortunately, it’s an exercise often skipped because the question under consideration has been used successfully in countless projects over decades of research.

But words and meanings change over time. Questions, even very basic ones, can become distorted in relatively short order. A few years ago, few would have thought that the traditional respondent gender question would need to be reworded.

The world is facing more frequent and more nuanced changes in explicit and implicit language today than at any other time in recent history. This is due in part to drastic changes in our modes of communication.

Thanks to COVID-19, we don’t schmooze over beers with our buddies anymore – but we also don’t call them to catch up. We e-mail, chat, text, tweet and Zoom. All of these things affect how people communicate and what the specifics of those communications really mean.

Snail mail isn’t for interpersonal communication anymore – it’s for official notifications and advertising. Spotify sings to us. Uber brings us dinner. We get our underwear from Amazon, spirituality from YouTube, news from a satellite and sex from Reddit. We go to the opera in bed, dress for meetings from the waist up, watch Monday Night Football with a cheer track and doom our children to a parochial view of the world by tutoring them ourselves.

Every one of these shifts changes the words we use to describe our lives.

Language 

The questions we ask – the very language itself – need careful tuning to be valid in today’s mercurial world. And no, I’m not suggesting we couch our questions in hipster slang. I believe researchers must test their verbiage among ordinary people – giving them the freedom to express their thoughts and feelings in their own words – before conducting quantitative research. Otherwise the quantitative work may be an exercise in delusion.

I am writing to encourage the marketing research community to keep the power of qualitative research well integrated into its research designs. Employ it early and often as we struggle to understand the present and anticipate the near (and not-so-near) future.