Curation, then implementation

Editor's note: Alison Munsch is principal of Insights for Actions Research in New York and is a tenured professor of business. She brings over 20 years of qualitative and quantitative research experience, is a trained moderator and facilitator and holds an M.A. in applied research from Queens College (CUNY) and a Ph.D. in psychology. Find Munsch on LinkedIn.

Artificial intelligence is reshaping the business landscape at a remarkable pace. Recent “State of AI” reports from McKinsey & Company indicate that a growing majority of organizations are using AI, particularly generative AI, in at least one business function, with especially high activity in marketing, sales, product development and customer operations (McKinsey & Company, 2024, 2025). At the same time, McKinsey estimates that generative AI could add between $2.6 trillion and $4.4 trillion in annual economic value across 63 use cases, with a substantial portion of that potential concentrated in customer-facing and insight-driven disciplines such as marketing and related research activities (Chui et al., 2023).

Despite this rapid adoption and the enormous projected value, many organizations report that they have not yet realized significant, enterprise-level profit impact from AI (McKinsey & Company, 2025). In other words, tools are spreading faster than the capabilities, frameworks and culture needed to use them wisely (McKinsey & Company, 2024, 2025). McKinsey’s work on AI risk also shows that explainability and inaccuracy are among the top concerns for adopters, while relatively few organizations have systematic programs in place to mitigate these issues, including bias in model outputs (McKinsey & Company, 2025). 

This tension is especially visible in marketing research and insights, where flawed or biased outputs can directly distort customer understanding. Industry reports indicate that although a growing share of insight suppliers now embed generative AI into their deliverables, data quality and synthetic-data contamination have become key barriers to trustworthy research (Greenbook, 2025). 

Other studies of marketing and research professionals similarly find that bias and fairness are among the most frequently cited concerns when using AI in advertising and market research contexts (Samaya and Singh, 2025). In response, professional bodies such as ESOMAR (with its International Code on Market, Opinion and Social Research and Data Analytics) and the Market Research Society of India have updated their codes to tighten ethical and transparency standards specifically for an AI-driven insights industry (Market Research Society of India, 2025). Together, these developments confirm that the gap between AI’s promise and its responsible use is felt most acutely in domains such as market research, where the stakes of biased or low-quality evidence are immediately visible in business decisions and in how consumers are represented.

Against this backdrop, some commentators characterize traditional market research as obsolete in an AI-driven marketplace, as if algorithms were about to replace analysts outright. Such claims often conflate automation with insight generation and overlook the discipline’s methodological foundations and continued strategic value. The truth is more nuanced and more promising. AI is not a replacement for market research. It is a tool that, when used thoughtfully, can amplify human intelligence, accelerate insight generation and even democratize access to decision-making data. Rather than rendering research obsolete, AI is catalyzing its transformation.

This article explores the evolving relationship between AI and marketing research, drawing on recent practitioner data (n = 32; see sidebar), thematic analysis and emerging frameworks for integration and bias mitigation. It also offers a framework for moving from experimental use of AI to responsible practice.

Play a better game

A useful analogy is to compare the evolution of market research to a game of chess. According to IBM (n.d.), the 1997 victory of its Deep Blue supercomputer over world champion Garry Kasparov marked an inflection point in computing. Today, anyone with a smartphone can access a chess engine stronger than any human. But even now, the best results often come from “centaur” chess, in which human players use machine suggestions to drive creative, strategic decisions. The real advantage of AI is not autonomy but augmentation. Human intuition, contextual judgment and experience address the limitations of algorithmic systems, while AI contributes speed, scale and pattern recognition beyond human capacity.

Organizations generate the greatest value when AI is intentionally integrated into human workflows rather than deployed as a replacement for expertise (Wilson and Daugherty, 2018). The same logic applies to marketing research: The human strategist who knows what questions to ask and how to interpret messy reality is more critical than ever, but that strategist now has a powerful partner in AI. The challenge is not whether AI will “take over” but how researchers can play a better game with AI as a strategic ally.

AI adoption is transitional, not transformational (yet)

The majority of respondents report cautious confidence or describe themselves as “still learning” how to incorporate AI into workflows. Overall confidence in using AI in workflows is moderately positive (mean rating 7.2 on a 10-point scale), but relatively few respondents place themselves at the very top of the confidence scale. More than half of the respondents surveyed report that they are exploring (20%) or piloting (37%) AI tools, indicating that although AI adoption is underway, it remains nascent and has not yet reached full maturity. This finding mirrors broader industry evidence that many organizations are piloting AI but have not fully embedded it into scalable, repeatable research workflows (McKinsey & Company, 2025).

Although practitioners are not highly confident in their use of AI, it is no longer a distant concept for most. On a 10-point scale, average familiarity with AI tools is mildly positive at 6.4, indicating that respondents generally see themselves as moderately familiar rather than as novices. At the same time, frequency-of-use data show that AI is already embedded in many day-to-day workflows: 30% of respondents report using AI tools daily, 48% use them weekly and the remaining 22% engage with them at least monthly. These patterns suggest that AI has moved beyond experimentation into regular practice for most respondents, even as many still describe themselves as “learning” or only “moderately familiar.” This gap between modest self-rated familiarity and frequent use reinforces the need for structured frameworks and training to ensure that regular AI use is confident, critical and methodologically sound rather than ad hoc.

Moreover, even research and insights professionals who report moderate or high familiarity with AI are not yet fully applying AI tools in practice. The pattern echoes McKinsey’s observation that many firms experiment at the edges with AI but lack the operating models and governance needed to capture full value (McKinsey & Company, 2024, 2025).

Perceived challenges and bias risks

When asked about their primary concerns regarding the use of AI in market research, respondents most frequently selected challenges related to data quality.

Concerns include: 

  • data quality and accuracy (81%) 
  • bias in AI outputs (69%)
  • client trust and transparency (42%)
  • lack of internal expertise (42%)
  • ethical or privacy concerns (35%)
  • cost or ROI uncertainty (19%)
  • no major concerns (4%)

Together, these findings reflect a broader unease that AI systems may produce results that appear authoritative yet are potentially misleading if not carefully governed. Concern about client trust and transparency underscores that stakeholders require not only reliable insights but also clarity regarding how those insights were generated and validated. Ethical and privacy considerations further complicate the landscape. Practitioners express concern that opaque, black-box systems may obscure underlying assumptions, limit accountability or inadvertently amplify existing inequities, particularly in sensitive applications such as segmentation, targeting and diversity-related analysis. Additional concerns include the risk of AI-generated hallucinations (fabricated or inaccurate outputs presented as factual) and the challenge of establishing consistent, standardized deployment practices across the organization.

Taken together, these findings reinforce a central argument of this article: AI must be curated rather than blindly deployed. Professionals value speed and efficiency, but not at the expense of nuance, context and fairness. The market researcher's role is increasingly to design processes and guardrails that ensure that AI augments rigor rather than undermines it.

What practitioners are saying

Thematic analysis of open-ended responses in the practitioner survey yielded the following five themes with illustrative quotes:

1. Train thoughtfully
“We need stronger time investment in training users … AI is only as smart as the person using it.”

Practitioners are calling for intentional training – not just tool access – so that teams know when, where and how to use AI in research workflows.

2. Balance speed and judgment
“Dashboards and automation are great – but not at the risk of making AI more valuable than the researchers themselves.” 

AI is valued for efficiency but respondents emphasize that speed cannot substitute for judgment, context and empathy.

3. Bias and governance are central 
“Prompting is key … AI tends to agree with generic input that leads to biased assessments.”

“Proper governance is essential in marketing and marketing research.”

Echoing global conversations about AI ethics, practitioners seek governance models and industry codes, such as the updated International Chamber of Commerce (ICC) and ESOMAR International Code on Market, Opinion and Social Research and Data Analytics, to ensure accountability in AI deployments (Market Research Society of India, 2025; Research World, 2025).

4. AI as infrastructure, not magic
“Integrations, privacy, replicable use cases – AI is a toolkit, not a genie.”

Respondents repeatedly emphasized that AI creates real value only when it is embedded into existing systems and workflows, not when it is treated as a standalone novelty. In their view, AI should function like infrastructure, integrated via APIs, governed by clear privacy and security standards and tied to repeatable, documented use cases. When AI is treated as a peripheral add-on rather than embedded within core research workflows, it is often perceived as difficult to govern and prone to misuse. By contrast, when thoughtfully integrated into existing tools and processes, AI functions as a scalable performance accelerator rather than a novelty.
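
As a minimal illustration of what “AI as infrastructure” can look like in practice, the sketch below wraps whatever model call a team already uses in a thin audit layer that ties each call to a documented use case and logs it for later review. The function and file names (run_documented_ai_call, ai_audit_log.jsonl) are hypothetical, and the wrapper is one assumed shape such plumbing might take, not a prescribed standard.

```python
import hashlib
import json
import time
from pathlib import Path
from typing import Callable

# Hypothetical location for the shared audit trail.
AUDIT_LOG = Path("ai_audit_log.jsonl")


def run_documented_ai_call(use_case_id: str, prompt: str,
                           model_fn: Callable[[str], str],
                           model_name: str) -> str:
    """Run an AI call through a documented, repeatable wrapper.

    `model_fn` is whatever one-argument client call the team already uses;
    this wrapper only adds traceability around it.
    """
    output = model_fn(prompt)
    record = {
        "use_case_id": use_case_id,  # ties the call to an approved, documented use case
        "model": model_name,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        # Store hashes rather than raw text so the log stays auditable without retaining PII.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return output


# Example usage with a stand-in model function:
summary = run_documented_ai_call(
    use_case_id="verbatim-summarization-v1",  # illustrative use-case ID
    prompt="Summarize the main themes in these open-ended responses.",
    model_fn=lambda p: "stub response",       # replace with the team's real client call
    model_name="stub-model",
)
```

Hashing prompts and outputs rather than storing them verbatim is one way to keep the trail auditable while respecting privacy and security standards.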

5. Respect the context
“AI cannot see the big picture … it doesn’t know what is truly important. That’s the researcher’s job.”

This underscores a core theme in both practice and scholarship: AI can recognize patterns but it doesn’t understand stakes, culture or long-term brand implications without human interpretation.

Mitigating bias with the PAIR framework

To move from experimental use of AI to responsible practice, market researchers need more than tools; they need a process. The PAIR framework – problem formulation, AI tool selection, interaction and reflection – originally developed for education in the age of generative AI, offers a structured, human-centric way to integrate AI while actively mitigating bias (Acar, n.d.). Although designed for pedagogy, its core tenets of human agency, skill-building and responsibility translate directly into applied market research, where biased outputs can have immediate consequences for how consumers are understood and targeted (Bertoncini, 2025; Samaya and Singh, 2025). Each letter of the acronym encourages specific considerations for marketing research and insights practice, as follows:

1. Problem formulation: Making bias visible up front
In market research, poorly framed questions can encode bias before any data are collected or AI is invoked. Applying PAIR therefore begins with explicitly defining the problem: What decisions need to be made? What are the objectives and the resulting research problem? What population characteristics and constraints apply, including which segments might be underrepresented or systematically misclassified? For example, teams can identify where historical data may reflect skewed sampling or discriminatory practices and plan compensating steps (e.g., oversampling underrepresented groups, flagging sensitive variables). By foregrounding these issues in the problem-formulation stage, researchers reduce the risk that AI simply amplifies historical bias in an automated manner (Acar, n.d.; Munsch, 2018).
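
One way to make such representation issues visible during problem formulation is a simple sample audit against known population benchmarks. The sketch below is illustrative only; the segment variable, benchmark shares and tolerance threshold are hypothetical assumptions a team would replace with its own figures.

```python
import pandas as pd

# Hypothetical population benchmarks (segment shares summing to 1.0).
POPULATION_BENCHMARKS = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}


def flag_underrepresented(sample: pd.DataFrame, column: str,
                          benchmarks: dict, tolerance: float = 0.05) -> list:
    """Return segments whose observed share falls more than `tolerance` below the
    benchmark, as candidates for oversampling, weighting or cautious interpretation."""
    observed = sample[column].value_counts(normalize=True)
    flagged = []
    for segment, expected in benchmarks.items():
        share = float(observed.get(segment, 0.0))
        if expected - share > tolerance:
            flagged.append({"segment": segment, "observed": share, "expected": expected})
    return flagged


# Example with toy data: the 18-34 group is clearly under-sampled.
df = pd.DataFrame({"age_group": ["35-54"] * 50 + ["55+"] * 40 + ["18-34"] * 10})
print(flag_underrepresented(df, "age_group", POPULATION_BENCHMARKS))
# -> [{'segment': '18-34', 'observed': 0.1, 'expected': 0.3}]
```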

2. AI tool selection: Choosing systems with guardrails, not just features
The second step, AI tool selection, prompts researchers to evaluate tools not only for accuracy and convenience but also for interpretability, transparency and bias controls. For instance, researchers can require vendors to disclose training data sources, bias testing procedures and explainability features before integrating a model into their workflow (Greenbook, 2025; Market Research Society of India, 2025). In practice, this means rejecting black-box tools for high-stakes applications such as segmentation or predictive modeling of vulnerable groups and instead favoring systems that allow audits, error analysis and human override.
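
To keep this kind of vendor evaluation repeatable rather than ad hoc, a team might score each candidate tool against an explicit checklist before approving it for a given use. The criteria, weights and thresholds below are illustrative assumptions, not an industry standard.

```python
# Illustrative selection criteria and weights for evaluating an AI tool before adoption.
CRITERIA_WEIGHTS = {
    "training_data_disclosed": 0.25,
    "bias_testing_documented": 0.25,
    "explainability_features": 0.20,
    "audit_and_error_analysis": 0.15,
    "human_override_supported": 0.15,
}


def evaluate_tool(scores: dict, high_stakes: bool,
                  high_stakes_bar: float = 0.8, routine_bar: float = 0.6) -> dict:
    """Combine 0-1 criterion scores into a weighted total and decide whether the
    tool clears the bar for its intended use (stricter for high-stakes work)."""
    total = sum(weight * scores.get(criterion, 0.0)
                for criterion, weight in CRITERIA_WEIGHTS.items())
    bar = high_stakes_bar if high_stakes else routine_bar
    return {"weighted_score": round(total, 2), "approved_for_use": total >= bar}


# A black-box tool with no bias documentation fails the high-stakes bar:
print(evaluate_tool({"explainability_features": 0.2, "human_override_supported": 1.0},
                    high_stakes=True))
# -> {'weighted_score': 0.19, 'approved_for_use': False}
```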

3. Interaction: Prompting, stress-testing and cross-checking outputs
In PAIR’s interaction phase, researchers do not passively accept AI outputs; they actively probe them. In market research, this can take the form of:

  • Testing how different prompts influence sentiment summaries or persona generation.
  • Comparing AI-generated segment descriptions against known quantitative patterns and qualitative findings.
  • Running stress tests where edge cases (e.g., niche or marginalized consumer groups) are explicitly queried to see whether the model stereotypes or erases them.

This interactive, experimental stance helps surface both overt and subtle biases, such as overindexing on majority behaviors or reinforcing gendered or racialized assumptions, before insights are presented to clients or internal stakeholders (PNAS, 2025; Samaya and Singh, 2025).
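
A lightweight way to operationalize these stress tests is to run the same question through several prompt variants and several edge-case segments, then review the outputs side by side for stereotyping, erasure or prompt-driven swings. In the sketch below, the prompt templates and segments are illustrative, and model_fn stands in for whatever client call the team already uses.

```python
from itertools import product
from typing import Callable

# Illustrative prompt variants and edge-case segments for a stress test.
PROMPT_TEMPLATES = [
    "Summarize the key purchase drivers for {segment} consumers.",
    "What, if anything, is actually known about purchase drivers for {segment} consumers?",
]
EDGE_CASE_SEGMENTS = [
    "rural seniors on fixed incomes",
    "first-generation immigrant households",
]


def stress_test(model_fn: Callable[[str], str]) -> list:
    """Collect outputs across prompt/segment combinations so a researcher can review
    them for stereotyping, erasure or large prompt-driven swings in the narrative."""
    results = []
    for template, segment in product(PROMPT_TEMPLATES, EDGE_CASE_SEGMENTS):
        prompt = template.format(segment=segment)
        results.append({"segment": segment, "prompt": prompt, "output": model_fn(prompt)})
    return results


# Usage: pass in whatever one-argument client call the team already uses, e.g.
# results = stress_test(lambda p: client.complete(p))  # `client` is hypothetical
# then review or diff the outputs across prompt variants for each segment.
```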

4. Reflection: Auditing bias and documenting learning
Finally, the reflection step encourages researchers to evaluate not only what AI contributed but also how it may have distorted the picture. Researchers can build post-project reviews that explicitly ask:

  • Where did AI outputs diverge from other data sources?
  • What kinds of bias or blind spots did we detect and how were they corrected?
  • How did our choice of prompts, tools or training data shape the narratives we produced?

Formalizing these reflections builds an auditable knowledge base and strengthens enterprise learning, ensuring that individual lessons become embedded organizational safeguards. This aligns with the responsibility-centric aspect of PAIR, in which AI is treated as a powerful but fallible collaborator whose outputs must be critiqued, contextualized and, when necessary, corrected by human judgment (Acar, n.d.; Research World, 2025).
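
Teams that want these reflections to form an auditable knowledge base can capture them in a structured record rather than scattered notes. The fields below simply mirror the three questions above; the record shape and file location are illustrative assumptions, not something PAIR itself prescribes.

```python
import json
from dataclasses import asdict, dataclass, field
from pathlib import Path

# Hypothetical location for the shared reflection log.
REFLECTION_LOG = Path("ai_project_reflections.jsonl")


@dataclass
class ProjectReflection:
    """A structured post-project record mirroring the reflection questions above."""
    project_id: str
    divergences_from_other_sources: list = field(default_factory=list)
    biases_detected_and_corrections: list = field(default_factory=list)
    prompt_tool_or_data_influences: list = field(default_factory=list)


def log_reflection(entry: ProjectReflection) -> None:
    """Append one reflection record to the shared, auditable log."""
    with REFLECTION_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")


# Example entry (all content illustrative):
log_reflection(ProjectReflection(
    project_id="2026-segmentation-pilot",
    divergences_from_other_sources=["AI personas understated price sensitivity vs. survey data"],
    biases_detected_and_corrections=["Over-indexed on urban respondents; reweighted and re-ran"],
    prompt_tool_or_data_influences=["Neutral-tone prompts reduced stereotyped language in summaries"],
))
```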

Taken together, PAIR supports a human-centric, skill-centric and responsibility-centric approach to AI in market research: AI is used to augment, not replace, researcher insight; teams develop transferable skills in prompt design, model critique and bias detection; and every use of AI is anchored in ethical awareness rather than blind efficiency. In this way, PAIR does more than structure AI adoption – it becomes a practical roadmap for mitigating bias while leveraging AI to amplify the reach and depth of market research.

The future is hybrid

Market research is not dead; it is being reborn. In an age when data are abundant but understanding is scarce, human insight remains essential. AI can reduce routine tasks, help you surface patterns in large datasets and expand access to insights across organizations. However, without ethical vigilance, upskilling and structured approaches such as PAIR, AI risks deepening bias and undermining trust. This concern becomes even more pressing as agentic AI systems – tools capable of initiating actions, chaining decisions and operating with greater autonomy – begin to enter research workflows.

As in chess, the advantage does not belong to humans or machines alone (IBM, n.d.) but to teams that learn to play in concert, combining human judgment with machine intelligence. As AI systems grow more capable and increasingly proactive, the research teams that lead will be those that keep human accountability at the center. In this future, AI is not a replacement for expertise but a disciplined partner; designed, governed and continually refined to mitigate bias in both machine outputs and human decision-making. Because bias does not reside in algorithms alone; it also lives in the assumptions, incentives and interpretations that shape how those algorithms are built and used. Marketing research in the age of AI is not simply about accelerating insight; it is about accelerating insight responsibly, combating bias wherever it appears and ensuring that speed never outruns human judgment. 

References

Acar, A. (n.d.). “The PAIR framework: A guide for educators integrating generative AI.” Journal of Information Technology in Education, 33(1), Article 9. https://scholarworks.lib.csusb.edu/jitim/vol33/iss1/9/

Amin, M. R., Asbi, A., Sivakumaran, V. M., Kim, J., and Septiarini, E. (2025). “Artificial intelligence (AI) adoption in marketing strategies: Navigating the present and shaping the future business landscape.” International Journal of Information Management, 102799. https://doi.org/10.1016/j.ijinfomgt.2025.102799

Bertoncini, A. L. (2025). “AI and cognitive biases in ethical decision-making.” AI, 1(1), 23–38. https://doi.org/10.3390/aieduc1010003

Chui, M., Manyika, J., and McKinsey Global Institute. (2023, June 14). “The economic potential of generative AI: The next productivity frontier.” McKinsey & Company.

Fox, C., and Schuster, G. (2025). “AI biases in marketing practice: Perception and action strategies.” International Conference on Gender Research, 7(1), 45–57. https://papers.academic-conferences.org/index.php/icgr/article/download/3226/3080 

Greenbook. (2025). “The role of artificial intelligence in market research: Opportunities and limitations.” Greenbook Insights. https://www.greenbook.org/insights/the-prompt-ai/the-role-of-artificial-intelligence-in-market-research-opportunities-and-limitations

Market Research Society of India. (2025, May 5). “MRSI adopts ICC/ESOMAR 2025 code: Tightens ethics for an AI-driven insights industry.” The Economic Times.

McKinsey & Company. (2024, May 30). “The state of AI in early 2024: Gen AI adoption spikes and starts to generate value.”

McKinsey & Company. (2025, March 12). “The state of AI: How organizations are rewiring to capture value.”

Munsch, A. (2018). Guest author: “Problem definition for data-driven PR.” In J. Eggensperger and N. Redcross (Eds.), Data-Driven Public Relations Research: 21st Century Practices and Applications (pp. 66–73). Routledge.

Munsch, A. (2026, February). “AI and the future of market research: Practitioner survey findings.” Unpublished survey report. Insights for Actions Research.

IBM. (n.d.). Deep Blue. IBM History. https://www.ibm.com/history/deep-blue

PNAS. (2025). “AI-AI bias: Large language models favor communications produced by LLMs.” Proceedings of the National Academy of Sciences, 121(4), e2415697122. https://doi.org/10.1073/pnas.2415697122

Research World. (2025). “AI in market research: Five rules to live by.” ESOMAR.

Samaya, S. S., and Singh, A. (2025). “AI in market research and advertising: Bias in AI algorithms and its impact on local businesses.” International Research Journal of Education and Technology.

Times of India. (2026, January 18). “IIM Lucknow research calls for an ethical reset in AI-driven marketing.” The Times of India.

Wilson, H. J., and Daugherty, P. R. (2018). “Collaborative intelligence: Humans and AI are joining forces.” Harvard Business Review, 96(4), 114–123.

Methodology

This article draws on both secondary and primary research. First, a targeted examination of peer-reviewed journal articles and industry reports addressing AI in marketing and market research was conducted. Second, an online survey using an availability (convenience) sample of research and insights professionals across industries, regions and levels of decision-making responsibility was completed in January 2026. Of approximately 150 practitioners invited, 32 completed the questionnaire, yielding a response rate of about 21%, which is typical for online surveys of busy professionals using convenience sampling. The survey included closed-ended and open-ended questions on AI usage, concerns, perceived benefits and challenges (Munsch, 2026).