How the role of the qualitative researcher sharpens in the age of AI
Editor’s note: Jiten Madia is the founder of myTranscriptionPlace, a transcription and translation services firm, and Flowres (flowres.io), an online qualitative research platform. Madia is an alumnus of Chicago Booth and holds an MBA in marketing from NMIMS, Mumbai. Find Madia on LinkedIn.
For the better part of the last three years, qualitative researchers have been engaged in an important (and now settled) debate: Should AI have a role in qualitative data analysis? Arguments on both sides have been made. Pilots have been run. And early adopters have shared results.
Today, it’s time to move past that question. According to Qualtrics’ 2026 Market Research Trends study (registration required), which surveyed over 3,000 researchers across 17 countries, 95% of researchers now use AI tools regularly or are experimenting with them. The dividing line is no longer between those who use AI and those who do not. Instead, it is between those with a clear AI strategy and those still finding their way. AI is already doing the work of coding transcripts, identifying patterns and generating initial summaries. The more productive question now is not whether AI belongs in qualitative research, but what the qualitative researcher becomes when AI handles the heavy lifting long considered inherent to qualitative data analysis.
The answer, I believe, is that the researcher’s role does not shrink. It sharpens.
The burden of operational dependency
To understand where we are headed, it helps to be honest about where we have been. A significant portion of a qualitative researcher’s working life has always been consumed by tasks that are necessary to glean consumer insights. But these (often iterative) tasks are not where researchers add their distinctive value.
Consider the typical flow of a multimarket qualitative project. Transcripts need to be obtained in time. This is often done via project managers but still represents a dependency that the researcher must track. Content analysis and coding need to happen before the researcher can begin building a coherent story. Even the act of notetaking during sessions (which researchers rightly value as a thinking tool) can become a distraction when it pulls attention away from observing the nuance of what a respondent is communicating.
Many of these operational tasks have traditionally been outsourced. Transcription to specialist firms. Coding frameworks to junior team members. Content analysis to platforms. But outsourcing does not eliminate dependency; it merely redistributes it. The researcher is still left waiting. The researcher still manages the handoff. And during that wait, momentum is lost, hypotheses go stale and the window for deep interpretative work narrows.
AI changes this equation. When transcription is near-instant, when initial coding happens in minutes rather than days, when pattern identification surfaces alongside the data itself, the researcher is no longer managing dependencies. They are free to do what they were trained to do – add value by thinking, evaluating, comparing and analyzing.
What researchers can do when AI takes on the grunt work
If AI takes on the operational and analytical grunt work, how should the qualitative researcher spend their time? Four areas stand out:
1. Immerse in the data, even while it is being collected. Researchers should attend as many live sessions as possible. Not to take notes (AI can now handle those), but to build an intuitive feel for what is happening beneath the surface of respondent replies. This includes watching sessions conducted in languages the researcher does not speak natively. The goal is immersion in the texture and emotion of the conversation, which comes from observing body language, silence and vocal patterns. After all, the best qualitative insights tend to emerge from researchers who are soaked in the data, rather than from those who process it at arm’s length.
2. Verify and pressure-test AI outputs. AI is fast and can be thorough. Yet it isn’t infallible. Verification becomes a critical function, and one that organizations should invest in seriously. Junior researchers, traditionally burdened with notetaking and content tabulation, can be redeployed as verification specialists. Their job: cross-check AI-generated themes against raw data, flag interpretive leaps that lack evidentiary support and ensure that the analytical output is grounded rather than merely plausible-sounding. Although some purpose-built qualitative analysis platforms have begun building such verification workflows, human judgment in verification remains irreplaceable.
3. Test, apply and choose from multiple analytical frameworks. This is perhaps the most exciting opportunity for researchers who have been freed from grunt work. When coding and pattern-spotting are no longer bottlenecks, the researcher gains the ability to run data through multiple interpretative lenses. The same dataset can be examined through Maslow’s hierarchy of needs, through Hofstede’s cultural dimensions, through Kapferer’s brand identity prism and many more. Each framework illuminates different facets of the dataset. Previously, time constraints forced researchers to pick one analytical approach and commit to it. AI removes that constraint. As Christou has argued, the key is to treat AI as an analytical partner, where prompt refinement and careful vetting will prevent biased or fictitious findings. Now, the researcher’s skill is knowing which frameworks to apply, how to synthesize outputs and what the combined picture reveals that no single lens would have shown.
4. Focus on meaning-making and client context. The most valuable thing a senior qualitative researcher brings to a project is not their ability to code data. Instead, it’s their understanding of the client’s world. They know the internal politics. They know which findings will resonate with the CMO versus the brand manager. They understand the strategic constraints that shape what the organization can actually act on. AI has no access to any of this. Researchers who transform well-coded, well-patterned data into defensible stories that move organizations toward a specific decision have always been in demand, and always will be.
Raising the bar on what human-led qualitative research must deliver
For years, the qualitative research industry’s defense against AI encroachment has been the claim that machines cannot interpret and determine meaning. Today, that claim is becoming increasingly untenable. Large language models are demonstrably capable of producing coherent, contextually aware and (occasionally) even insightful interpretative outputs.
But the picture is more nuanced than it appears. In a widely cited study, David L. Morgan found that AI performed reasonably well at reproducing concrete, descriptive themes but was consistently less successful at locating subtle, interpretative ones. Subsequent studies have reinforced this finding. At Quirk’s LA 2025, one researcher expressed excitement about AI’s ability to analyze thousands of open-ended responses in hours, while another cautioned that the industry is drifting too far from keeping research human-centered (Rival Technologies, 2025). Both perspectives are valid. The question is not which camp is right, but how practitioners and users can hold both truths simultaneously.
I believe human researchers will remain several steps ahead of AI in the work that matters most, provided they invest in staying ahead. Interpretation is not a static skill. Taste, judgment, industry context, the ability to sense what is significant and what is noise, the capacity to hold contradictory findings in tension and arrive at a synthesis that serves a client’s specific strategic moment … these are capabilities that deepen with practice and, conversely, atrophy with complacency.
Researchers who thrive in this new landscape will be those who treat AI not as a threat to resist or a tool to grudgingly adopt, but as an accelerant that raises the bar on what human-led qualitative research must deliver. If AI can produce a competent thematic analysis, then competent thematic analysis is no longer the deliverable. The deliverable is the insight that sits on top of the analysis – the interpretative leap, the strategic recommendation, the finding that changes how a client thinks about their category, customer or brand.
The new standard for qualitative research
The qualitative research industry is at an inflection point, where AI is replacing the parts of the qualitative researcher’s job that were never the reason researchers were valued most.
The debate about whether to use AI in qualitative research is over. The question now is whether researchers will seize the opportunity to elevate their craft: to spend less time chasing and managing transcripts and coding spreadsheets, and more time doing the deep, interpretative, context-rich work that no algorithm can replicate.
Researchers who make that shift will find that their role is not diminished. It is, for the first time in several decades, fully focused on what actually matters.
References
Claveria, Kelvin. “How brand researchers are elevating insights: Quirk’s LA 2025 recap.” Rival Technologies, 2025. https://www.rivaltech.com/blog/quirks-la-2025
di Gregorio, Silvana. “The state of AI in qualitative research in 2025.” Lumivero, 2025. https://lumivero.com/resources/blog/state-of-ai-in-qualitative-research/
Morgan, David L. “Exploring the use of artificial intelligence for qualitative data analysis: The case of ChatGPT.” International Journal of Qualitative Methods, 22, 2023. https://doi.org/10.1177/16094069231211248
Webster, Will, and Davis, Rachad. “2026 Global Market Research Trends Report.” Qualtrics, 2026. https://www.qualtrics.com/articles/strategy-research/market-research-trends/