Editor’s note: Rich Raquet is president and founder of TRC, a Pennsylvania-based market research firm. This is an edited version of a post that originally appeared under the title, “More than one right answer?”
As you may be aware, academia is currently wrestling with a “replication crisis”: a paper is published with stunning findings and gets lots of attention, but when other academics try to duplicate the results, they get a very different answer. Many causes have been cited, from small or unrepresentative samples (often modest numbers of college students) to the bias of the researcher. Concerns like these are not new to market research. We spend a lot of time working to get representative samples, considering the limitations of sample size and trying to avoid any bias in the research we do.
Martin Schweinsberg of ESMT Berlin offers another possible cause of the crisis, and one that I think we would do well to consider. He shared the same data (3.9 million words from nearly 800 comments made on Edge.org) with 49 different groups of researchers and asked them to determine whether the data supported a hypothesis: “Women’s tendency to participate in the forum would rise as more women participated.”
The results were mixed: 29% agreed with the hypothesis, 21% disagreed and the remainder found no evidence either way. While bias in interpreting the data may have played a role, Schweinsberg suggests that these differing results may not be the result of bias at all but rather of good-faith choices about things like how terms are defined or which statistical methods are employed.
For example, when determining the level of women’s participation, should we look at the number of comments women posted or at the number of words they wrote?
As it turns out, seemingly tiny differences like these lead to very different conclusions.
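To make that concrete, here is a minimal sketch in Python, using made-up numbers rather than the actual Edge.org data, of how two equally reasonable definitions of “women’s participation” can point in opposite directions on the very same comment log.

```python
# A minimal sketch (not Schweinsberg's actual analysis) showing how two
# reasonable definitions of "women's participation" can point in opposite
# directions on the same data. All numbers below are invented for illustration.

# Each tuple: (month, author_gender, words_in_comment)
comments = [
    # Month 1: women post few comments, but long ones
    (1, "F", 400), (1, "M", 100), (1, "M", 100), (1, "M", 100),
    # Month 2: women post more comments, but shorter ones
    (2, "F", 60), (2, "F", 60), (2, "M", 150), (2, "M", 150),
]

def share_of_comments(month):
    """Definition A: fraction of that month's comments authored by women."""
    rows = [c for c in comments if c[0] == month]
    return sum(1 for c in rows if c[1] == "F") / len(rows)

def share_of_words(month):
    """Definition B: fraction of that month's words written by women."""
    rows = [c for c in comments if c[0] == month]
    return sum(c[2] for c in rows if c[1] == "F") / sum(c[2] for c in rows)

for month in (1, 2):
    print(f"Month {month}: "
          f"comment share = {share_of_comments(month):.0%}, "
          f"word share = {share_of_words(month):.0%}")

# Output:
#   Month 1: comment share = 25%, word share = 57%
#   Month 2: comment share = 50%, word share = 29%
# By definition A, women's participation rose; by definition B, it fell.
```

Neither definition is wrong; each is a defensible, good-faith choice, yet on this toy data they support opposite conclusions about the same hypothesis.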
In market research, this should not surprise us. Even in quantitative research, where statistics rule, there are times when we mu...