What marketing research and insights can teach us about bias and the replication crisis 

Editor’s note: Rich Raquet is president and founder of TRC, a Penn.-based market research firm. This is an edited version of a post that originally appeared under the title, “More than one right answer?” 

As you may be aware, academia is currently wrestling with a “replication crisis” – a paper with stunning findings is published, gets lots of attention and then, when other academics try to duplicate the results, they get a very different answer. Many causes are cited, from small or unrepresentative samples (often a handful of college students) to the bias of the researcher. Concerns like these are not new to market research. We spend a lot of time working to get representative samples, considering the limitations of sample size and trying to avoid any bias in the research we do.

Martin Schweinsberg of the European School of Management offers another explanation for the crisis, and one I think we would do well to consider. He shared the same data (3.9 million words from nearly 800 comments made on Edge.org) with 49 different groups of researchers and asked them to determine whether the data supported a hypothesis: “Women’s tendency to participate in the forum would rise as more women participated.”

The results were mixed – 29% agreed with the hypothesis, 21% disagreed and the remainder found no evidence either way. While there may have been bias in interpreting the data, Schweinsberg suggests these differing results may stem not from bias but from good-faith choices about things like how terms are defined and which statistical methods are employed.

For example, when determining the level of women’s participation, should we look at:

  • The number of words used?
  • The number of characters used?
  • The number of individual posts, regardless of length?
  • Some other means?

As it turns out, seemingly tiny differences like these lead to very different conclusions.
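To make that concrete, here is a minimal Python sketch using invented comments (not the Edge.org data): the same handful of posts gives women a clear majority of “participation” when we count posts, but a minority when we count words or characters.

```python
# A hypothetical sketch: the same four forum comments scored under three
# definitions of "participation." The comments are invented for illustration,
# not taken from the Edge.org dataset.

comments = [
    {"gender": "F", "text": "Short reply."},
    {"gender": "F", "text": "Another brief note from a second commenter."},
    {"gender": "F", "text": "Quick point."},
    {"gender": "M", "text": (
        "A single, much longer comment that elaborates on the thread at length, "
        "adding background, qualifications and several supporting examples."
    )},
]

def womens_share(metric):
    """Share of total 'participation' attributed to women under a given metric."""
    total = sum(metric(c) for c in comments)
    women = sum(metric(c) for c in comments if c["gender"] == "F")
    return women / total

by_posts = womens_share(lambda c: 1)                        # each post counts once
by_words = womens_share(lambda c: len(c["text"].split()))   # word count
by_chars = womens_share(lambda c: len(c["text"]))           # character count

print(f"Women's share by posts:      {by_posts:.0%}")   # 75%
print(f"Women's share by words:      {by_words:.0%}")   # ~37%
print(f"Women's share by characters: {by_chars:.0%}")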


Drawing conclusions: quantitative research

In market research, this should not surprise us. Even in quantitative research, where statistics rule, there are times when we must rely on more than just the numbers. Two examples:

  • When analyzing scale question results, we can look at top box, top two, mean, median or perhaps something like a likelihood to recommend score – which is right?
  • When conducting segmentation analysis, we can employ many techniques and arrive at many different segment solutions – which is right?

In both cases, the “right” answer is the one that is not just statistically rigorous but also the one that most clearly answers the business challenge. For example, with segmentation, the solution is often the one that is the easiest to apply to the market at large or to score a large database.
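As a hedged illustration of the first example (the ratings below are invented, not client data), here is a short Python sketch in which two products trade places depending on which summary of the same five-point scale we report.

```python
# A hypothetical sketch of how different summaries of the same 5-point scale
# data can point to different "winners." Ratings are invented for illustration.
from statistics import mean, median

ratings = {
    # Product A: polarizing - many 5s, but also many low scores
    "Product A": [5, 5, 5, 5, 1, 2, 2, 1, 5, 1],
    # Product B: consistently good but rarely exceptional
    "Product B": [4, 4, 4, 4, 4, 4, 3, 4, 4, 4],
}

for name, scores in ratings.items():
    top_box = sum(s == 5 for s in scores) / len(scores)
    top_two = sum(s >= 4 for s in scores) / len(scores)
    print(f"{name}: top box {top_box:.0%}, top two {top_two:.0%}, "
          f"mean {mean(scores):.1f}, median {median(scores)}")

# Product A wins on top box (50% vs. 0%); Product B wins on top two, mean and median.
```

A polarizing product can dominate on top box while a consistently good one wins on top two, mean and median; the business question, not the statistic, decides which summary matters.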

Even when statistics give us the “right” answer, it still might not be “correct.” For example, we always use the results of discrete choice (conjoint) analysis to optimize toward a client’s stated goal, but that isn’t always the answer the client goes with. Often there are other factors that are hard to quantify or that go beyond the stated goal. For example, changing a particular feature might make it easier to gain support internally. Typically, such changes tweak less important features and thus leave a result that is close to the optimal solution.

Drawing conclusions: qualitative research

When it comes to qualitative data, the challenge is even greater. Text analytics offers amazing tools that promise to quantify unstructured data like open-end responses. For many, there is comfort in taking 1,000 unruly verbatim responses and reducing them to a series of numbers, but as noted above, this likely hides much of the story. So what should we do?

First, we should not rely completely on AI to do our work. I believe the better approach is to combine AI tools with a strong analyst – for example, tools that allow an analyst to interrogate text data in a way that goes beyond cold statistics. A statistic points to an area worthy of study and connects it to the verbatim comments that back it up; the analyst can then key in on these and determine whether the finding is worth reporting.
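As a rough sketch of that workflow (the open-ended responses and the keyword below are invented for illustration), the idea is simply to pair a frequency statistic with the verbatims behind it so the analyst can judge the finding in context.

```python
# A minimal, hypothetical sketch: surface a cold statistic from open-ended
# responses, then pull the verbatims behind it for the analyst to review.
from collections import Counter

open_ends = [
    "The price is too high for what you get.",
    "Love the design, but the price made me hesitate.",
    "Customer service was slow to respond.",
    "Great value, the price felt fair to me.",
]

# Step 1: a simple statistic - how often does each word appear across responses?
words = Counter(w.strip(".,").lower() for r in open_ends for w in r.split())
print(words.most_common(5))  # "price" surfaces as a frequent term

# Step 2: connect the statistic back to the verbatims so the analyst can judge
# whether the finding is worth reporting (here, "price" cuts both ways).
keyword = "price"
for response in open_ends:
    if keyword in response.lower():
        print("-", response)
```

The count alone suggests price is the story; reading the verbatims shows it cuts both ways, which is exactly the judgment the analyst, not the tool, should make.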

Second, we should be careful not to treat qualitative data like quantitative data. In the best circumstances, we would combine the two to provide a powerful understanding of the business challenge. If we only have one or the other, we should recognize the limitations of each and not come to unwarranted definitive conclusions.

In other words, researchers should continue to do what they have always done … provide thoughtful, reliable and unbiased counsel.