Why human expertise remains essential in qualitative research
Editor’s note: Tina Launey is director of creative and editorial at Mozaic Group, Lake Mead, Nev.
As AI continues to make deeper inroads into market research, our clients have become increasingly interested in leveraging AI moderation to increase the speed and scope of qualitative research while lowering costs.
While we consider ways to cement client retention in this era of breakneck innovation, it’s important to remember our professional and ethical responsibility, as market researchers, to walk the fine line between embracing the newest innovation and thinking critically about its potential risks and implications. Only then can we help our clients make smart decisions about when and how to implement AI for primary research. Our recent experience on a B2B qualitative study serves as a timely reminder of the enduring importance of human expertise.
Qualitative interviews: A case study on fraudulent participants
Our Fortune 100 client engaged us to conduct a series of interviews with U.S.-based advanced technical decision-makers about the tooling they use in their jobs. Given the complexity of the topic and the difficulty of reaching a niche technical audience, we partnered with a recruitment firm that specializes in hard-to-recruit B2B audiences. The interviews were conducted by a moderator with more than 20 years of experience in consumer and B2B research, including niche technical audiences and topics. We used a platform for virtual qualitative fieldwork with live audio and video.
Audio-only participants: Several interviews into this study, the moderator noticed a trend among participants who’d initially joined the interview with audio only. While these participants consistently provided thoughtful, knowledgeable and very thorough answers, they tended to pause for several seconds before answering each question, which eventually raised the moderator’s suspicion.
Depending on the nature of the study, we sometimes allow participants to choose whether to join interviews with cameras on or off, although for this study in particular, the research director required any participant who dialed in with audio only to turn their camera on. In most cases, the suspicious participants were hunched in small, poorly lit spaces and were often wearing hoods (or even a towel) that partially obscured their faces.
Identifying fraudulent respondents: With suspicions rising, the moderator challenged one participant with a highly technical question that deviated from the interview script. When the participant was unable to answer, the moderator quickly realized he was speaking to a fraudulent respondent who was querying an AI platform for answers to the interview questions. Comparing a video capture of the participant with the LinkedIn profile provided to the recruiter confirmed that the fraudulent respondent’s gender and ethnicity did not match the information the recruiter had on file. Further investigation revealed that the fraudulent participant had joined the interview from an IP address outside the United States.
After pressing the recruiter to confirm the identities of all the participants who’d qualified for the study, we learned that every suspicious participant had, in fact, been a fraudulent respondent.
Of course, we are no strangers to respondent fraud; gaming is a well-established problem in quantitative research, and one we’ve developed a laundry list of best practices to combat. Our recent experience with fraudulent qualitative participants is a good reminder that AI presents new and different opportunities for fraudsters to cheat their way through even high-touch qualitative studies. The good news? Experienced researchers can sniff out fraudulent behavior quickly enough to replace those participants with new candidates who pass a rigorous vetting process.
Threats to data integrity in market research: Where do we go from here?
AI is a powerful tool in market research – improving the pace and agility of insights while, in many cases, extending our reach and our dollars. But with new opportunities come new threats to data integrity (see the March 2025 Quirk’s article on AI-fabricated studies). In light of AI-enabled fraud, here are several suggestions for ensuring respondent quality.
Regularly audit your vendors.
As specialists in technology, we often interview high-level cybersecurity executives, many of whom treat the SolarWinds cyberattack of 2020 as a parable that underscores the importance of third-party risk assessment – regularly evaluating your vendor-partners to ensure they’re applying the same rigorous standards and protocols you’d apply to your own systems and network. The same rule applies to third-party recruitment vendors: How are they validating respondents to ensure they are who they say they are? Do they require respondents to include a LinkedIn (for B2B studies) or other social media profile? Do they require a work e-mail address, or just a personal one? Do they collect and trace the respondent’s IP address? Will they agree to take some or all of the above steps to ensure optimal respondent quality?
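To make those questions concrete, here is a minimal sketch of what automated respondent vetting might look like, assuming your recruiter supplies each respondent’s claimed country, e-mail address and join IP. The FREE_EMAIL_DOMAINS list and the geolocate_ip() helper are hypothetical placeholders for whatever reference data and geo-IP service your team actually uses.

```python
# A minimal sketch of automated respondent vetting, not a definitive
# implementation. Assumes the recruiter supplies each respondent's e-mail,
# join IP and claimed country. FREE_EMAIL_DOMAINS and geolocate_ip() are
# hypothetical placeholders for your own reference data and geo-IP service.

FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}

def geolocate_ip(ip_address: str) -> str:
    """Hypothetical stub: swap in a call to your geo-IP provider."""
    return "US"  # placeholder return value

def vet_respondent(email: str, join_ip: str, claimed_country: str) -> list[str]:
    """Return a list of red flags; an empty list means no automated flags."""
    flags = []
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in FREE_EMAIL_DOMAINS:
        flags.append(f"personal e-mail domain ({domain}) on a B2B study")
    ip_country = geolocate_ip(join_ip)
    if ip_country != claimed_country:
        flags.append(f"join IP resolves to {ip_country}, not {claimed_country}")
    return flags

# Anyone who triggers a flag gets manual review before the interview.
print(vet_respondent("jdoe@gmail.com", "203.0.113.7", "US"))
```

None of these checks is conclusive on its own; they simply tell you where to focus the manual vetting your recruiter should already be doing.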
Lay eyes on your participants.
While online/virtual research is the norm and can be enormously beneficial in helping you reach difficult recruits, AI makes it much easier to game qualitative interviews. We suggest requiring respondents to join interviews with their cameras on, which makes it harder to impersonate a respondent and helps you weed out suspicious activity early on.
Make a case for live moderators.
While it’s true that AI-moderated research is faster, cheaper and more efficient than live moderation, the “garbage in, garbage out” gospel applies. On the front end, we recommend video interviews with a live moderator for at least a portion of your qualitative interviews – potentially most or all of them if it’s a hard-to-recruit audience that’s highly incentivized to participate. On the back end, AI-moderated interviews, like quantitative survey data, are likely to require stringent data cleansing to ensure data quality and the fidelity of your insights. If a response consistently sounds too good to be true – too articulate, too comprehensive, too positive, too rehearsed – it may be the work of a chatbot. A skilled researcher knows what clues to look for, so make sure you’re baking in human judgment, even if you’re not using a live moderator to conduct every interview. A simple post-interview screen like the sketch below can help surface the transcripts that most warrant that human review.
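For illustration, here is a minimal sketch of that kind of back-end screen, assuming your fieldwork platform can export, per question, the answer text and the response latency (the seconds of silence before the participant began speaking – the same tell our moderator noticed live). The thresholds are illustrative assumptions, not validated cutoffs.

```python
# A minimal sketch of a post-interview screen for AI-assisted answers.
# Assumes your platform exports, per question, the answer text and the
# response latency in seconds. Both thresholds are illustrative assumptions.

LATENCY_THRESHOLD_S = 5.0   # long pauses can indicate querying a chatbot
SUSPECT_WORD_COUNT = 120    # unusually long, polished answers to every question

def flag_transcript(transcript: list[dict]) -> list[str]:
    """Return human-readable flags for an analyst to review; empty = clean."""
    flags = []
    for i, turn in enumerate(transcript, start=1):
        if turn["latency_s"] >= LATENCY_THRESHOLD_S:
            flags.append(f"Q{i}: {turn['latency_s']:.0f}-second pause before answering")
        if len(turn["answer"].split()) >= SUSPECT_WORD_COUNT:
            flags.append(f"Q{i}: answer is unusually long and comprehensive")
    return flags

transcript = [
    {"latency_s": 7.2, "answer": "Our tooling strategy rests on three pillars..."},
    {"latency_s": 1.1, "answer": "Honestly, we use whatever ships with the IDE."},
]
print(flag_transcript(transcript))  # flags Q1 for the long pause
```

Flags like these don’t prove fraud; they tell a human researcher which interviews to scrutinize first – which is exactly the point: the tooling narrows the field, and the experienced researcher makes the call.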