Setting a new standard for transparency in marketing research
Editor’s note: Stefanie Francis is founder and CEO of Hootology, a New York-based market research firm.
When I first started in research, I remember being told, "Every study is inherently flawed." My naive, black-and-white Virgo mind couldn’t process that sentence. I had entered the field believing that, with enough rigor, human behavior could be made legible: variables and bias controlled and truth extracted. Two decades later, I now see that what drew me to research in the first place was the promise of organizing and explaining an unpredictable world. Growing up in an emotionally volatile environment, research offered the promise that human behavior might be rational – even orderly. If I could identify the sources of distortion, maybe I could quiet the noise. If I could control all variables, there would be less of the terrifying unknown.
I have spent my career trying to solve for bias in research. And for the first time, I’m confident we’re close – not because bias has disappeared, but because we finally have new tools to design around it. Bias is not a footnote in research; it is the substrate. Entire taxonomies catalog its forms: selection bias, nonresponse bias, confirmation bias, social desirability bias and so on. The problem is not that bias exists. The problem is that our industry has spent decades pretending it has been neutralized.
Researchers are, by training and temperament, a by-the-book bunch. Precision, accuracy, methodological rigor – these values built modern market research, and I once embodied them fully. But they also bred resistance to change. “This is how it’s done” became a substitute for asking whether it still worked.
Meanwhile, the world moved on.
When speed, AI and legacy methods collide, the cost of bias multiplies
Marketing is faster, more emotional, more culturally fluent. Product innovation faces pressure to connect meaningfully, not just incrementally. Business leaders make high-stakes tradeoffs under conditions of uncertainty – where getting it wrong is expensive, not just financially, but reputationally. The cost of bad insight has never been higher. To actually keep up, “better” must mean richer and more expressive, but it also must mean projectable and reliable. Decision makers carrying real risk need assurance, and that kind of confidence is predicated on methodological self-awareness. This is where many of our inherited practices break down.
For years, quantitative research accepted structural bias as the best we could do. Expert-designed answer lists defined the universe of possible responses before participants ever spoke. The people we claimed to be trying to understand were constrained by the assumptions of those who thought and breathed the topic every day – creating a false sense of completeness about what constituted 100% of a population’s thinking.
At the same time, surveys were overused to the point of uselessness. People began to associate surveys with the “How did we do?” pop-up after retail checkout rather than for things that actually matter – the future of health care, for instance. The result was not more signal, but less willingness to participate meaningfully. Think about it: When was the last time you thought taking a survey sounded like an exciting way to spend your precious time? Today, any quantitative study must be caveated with the fact that it only reflects the views of those still willing to take surveys. Does that group resemble the population we need to understand?
Both quant and qual suffer from another, less discussed flaw. Modern life is deeply digital, dynamic and participatory, yet we continue to ask people to engage through static forms and artificial settings that bear little resemblance to how they think, feel or communicate. We once worried about who was excluded when research moved online. We should be asking the inverse question now: Who is excluded by clinging to outdated formats in a fully digitized world?
Qualitative research carries its own limitations. Moderator bias, group dynamics and the non-projectable nature of small samples were long accepted as inherent constraints. Add to these the historic race and ethnicity biases that influenced who was invited, who was listened to and how insight was interpreted (or, shall I say, misinterpreted). “Directional” became the euphemism for qualitative insight, which basically meant you couldn’t trust it. Large, strategic decisions demand more. Much more.
And then generative AI tap-danced in and declared itself the cure for human bias. But anyone paying attention knew that promise never stood a chance. AI didn’t erase bias. It scaled it. And this contradiction is at the center of today’s insights industry. We depend on tools that insist they’re neutral even as they bear the fingerprints of the people who built them. We want data to “speak for itself,” but it can’t do that unless someone – a human someone – teaches it the language. AI models inherit the blind spots of their makers. When AI is treated as the centerpiece of the research process rather than a tool within it, the fantasy of neutrality becomes actively dangerous.
A modern reassessment of bias
Bias is not a flaw to be eliminated from research. It is a condition to be understood. When we identify it, we learn what we can – and cannot – infer from our data. In essence, we learn what the data does not say.
The path forward requires a modern reassessment of bias (research bias and societal bias) and a new standard for transparency. Today’s systems are more complex – but also far more capable of surfacing meaningful insight.
Designing with self-awareness means acknowledging that, still, no approach is neutral and no dataset is complete. It means treating human input not as a last-ditch failsafe, but as a core design principle. Methodology should articulate its own lens, naming the cultural, economic and historical assumptions embedded within it. Or better, letting humans speak for themselves, in their own words.
Transparency is integrity. And integrity is a differentiator.
This is not idealism; it is strategy. The firms that will lead the next era of insights are not those claiming AI purity, but those willing to say: here is the bias, and here is how we manage it. In an industry long obsessed with truth, honesty has become the most valuable currency.
When objectivity is no longer believable, candor is the only path forward. The future of insights belongs to organizations that preserve the human voice, design with self-awareness, maximize the best of new technologies and control for the rest – all while recognizing that acknowledging bias is not a weakness but the foundation of trust.
My left brain may still crave control, but my right brain has entered the chat. I now see that control was never the answer; clarity was. The path forward must be designing systems complex enough – and honest enough – to ensure every voice has a real chance to be heard.