Tips for ensuring the integrity of your market research

Editor’s note: Jim Longo is co-founder and chief strategy officer at market research firm Discuss. 

Since the beginning of this year, generative AI has hit like a tidal wave. Market research has entered a generative AI world, and there are going to be winners and losers. The winners will be those who embrace AI, but this article highlights the other key element to winning in this new world: preparing for and navigating the new risks that now come with market research. Consider it a navigational map around those risks.

There’s no doubt that artificial intelligence is transforming global markets and enterprises of all sizes. According to Forrester Research’s 2024 planning guide for technology executives (registration required), 51% of global business and technology professionals at future-fit organizations say their organizations have implemented and are currently expanding their investment in AI infrastructure. Forrester also forecasts that 59% of AI software spend will be generative by 2030. 

Generative AI has been a game changer for both quantitative and qualitative market research in accelerating the development of survey questions, generating summaries and creating discussion guides. By using machine learning algorithms, generative AI can identify patterns and themes in large data sets, assist with coding and categorizing data and help researchers identify emergent themes. Brands and agencies are taking notice and have begun implementing generative AI features into their market research processes to improve efficiency and shorten time to insights.

Fraud, AI and quantitative and qualitative research 

Despite the many benefits of generative AI for consumer research, it also comes with risks that are being revealed in the form of fraud. Unfortunately, fraud is not new in the research industry; in fact, I wrote on this topic earlier this year (“Online fraud in marketing research”). Since then, it has become more rampant, primarily due to the rise of generative AI.

According to an ESOMAR SWOT on AI, “Fraud is already a concern, but it could be a much bigger problem. AI might remove real humans from the research process leading to misinformation, deep fakes and deliberate misinformation.” 

Generative AI is being used fraudulently in both quantitative and qualitative research. For example, fake panelists posing as real customers are responding to survey questions, sometimes not even in their own language. Often it isn’t even a human answering the questions but a computer bot imitating human behavior.

Fraud also appears in text-based applications like online bulletin boards, where respondents use generative AI tools like ChatGPT to reply to a thread rather than provide their own insights. One of our clients recently said they know their articulation questions are being filled out with AI because the answers are now much longer than they used to be.

What can you do to minimize your fraud risk in this new generative AI world?

Suggested initiatives for ESOMAR and others to undertake include establishing rules about transparency, adding watermarks, creating ethics committees and increasing skill sets in the domain of AI.

Let’s look at four strategies that brands and agencies can take to help combat fraud:

Enhance screening for panels.

By implementing enhanced screening measures, researchers can verify the identities and backgrounds of participants more effectively. This can include making additional phone calls to screen participants, email verification, validating demographic information, checking IP addresses and cross-referencing data against external sources. You can also create a standard fraud statement, provided at the beginning of the screening process, that sets expectations for the integrity of the information provided and outlines the consequences of not complying, such as being blacklisted or not receiving compensation.
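For teams that manage their own panel data, some of these checks can be automated. Below is a minimal Python sketch; the field names, sample records and rules are hypothetical, not any panel provider's actual schema. It flags duplicate IP addresses across panelists and mismatches between a claimed country and IP geolocation.

```python
# Hypothetical panel-screening checks: duplicate IPs and country mismatches.
# Field names and sample data are illustrative assumptions only.
from collections import Counter

panelists = [
    {"id": "p1", "ip": "203.0.113.7", "claimed_country": "US", "geoip_country": "US"},
    {"id": "p2", "ip": "203.0.113.7", "claimed_country": "US", "geoip_country": "US"},
    {"id": "p3", "ip": "198.51.100.4", "claimed_country": "FR", "geoip_country": "BR"},
]

# Count how many panelists share each IP address.
ip_counts = Counter(p["ip"] for p in panelists)

for p in panelists:
    flags = []
    if ip_counts[p["ip"]] > 1:
        flags.append("duplicate IP across panelists")
    if p["claimed_country"] != p["geoip_country"]:
        flags.append("claimed country does not match IP geolocation")
    if flags:
        print(p["id"], flags)
```

Flagged records would still go to a human for review; checks like these narrow the list rather than make the final call.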

Rebalance quantitative and qualitative research.

Quantitative research has been valuable for many years, but it has often been over-indexed because the data is easy to summarize (it is mostly structured data on 1-5 scales and the like) in Excel, and therefore easier to scale. The problem is that it is now harder to trust quant responses, as many are being driven by generative AI. Even if only 10% of the data is fraudulent, that can significantly skew the results, and there is evidence showing that 30% to 50% of surveys are now being completed using generative AI.
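To see why even a small fraudulent share matters, here is a purely illustrative Python sketch (all figures are made up for demonstration) showing how a 10% segment of bots straight-lining top-box answers can shift the mean of a 1-5 scale question:

```python
# Illustrative only: how a small share of fraudulent responses can shift a 1-5 scale mean.
# All figures are hypothetical and not drawn from any real study.
import random

random.seed(42)

genuine = [random.choice([1, 2, 3, 3, 4, 4, 5]) for _ in range(900)]  # realistic spread
fraudulent = [5] * 100                                                # bots picking top-box every time

clean_mean = sum(genuine) / len(genuine)
mixed = genuine + fraudulent
mixed_mean = sum(mixed) / len(mixed)

print(f"Mean without fraud: {clean_mean:.2f}")
print(f"Mean with 10% fraudulent top-box responses: {mixed_mean:.2f}")
# Even a modest shift like this can change how a result is read against a benchmark.
```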

Unlike online surveys, qualitative research is a valuable method for exploring complex topics at length, and as such, it is harder for fraudsters to penetrate. In-person or virtual live interviews and focus groups allow researchers to get to know participants, dive deeper into their needs and motivations and verify their identity firsthand. This human interaction significantly reduces the risk of fraud and makes it easier for researchers to validate the integrity of the responses. In addition, because qualitative data is unstructured, generative AI can be beneficial in helping to summarize and make sense of it.

Incorporate video.

Incorporating video into your research methodology lets you see a person online and allows them to answer questions in real time and/or show a product on camera during an unboxing. For obvious reasons, this type of interaction is much harder to fake, and some platforms include additional checks and balances, like removing the ability to blur a video background, to ensure the participant is in a home or office environment.

Use technology to detect fraud.

Most of today’s qualitative and quantitative software applications have security measures in place to protect participants’ personal information. It is also important to look for platforms that include verification tools that can detect false or AI-generated responses. These tools can flag responses that appear to have been cut and pasted or response times that are significantly skewed, as sketched below.
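As a rough illustration of the kind of heuristics such tools might apply, here is a hypothetical Python sketch; the thresholds, field names and availability of keystroke counts are assumptions for demonstration, not any specific platform's implementation:

```python
# Hypothetical fraud-screening heuristics for open-ended survey responses.
# Thresholds and fields are illustrative assumptions, not a vendor's actual rules.
from dataclasses import dataclass, field

@dataclass
class Response:
    text: str
    seconds_on_question: float
    keystrokes: int          # captured by the survey front end, if available
    flags: list = field(default_factory=list)

def screen(response: Response) -> Response:
    # A long answer with almost no keystrokes suggests a paste rather than typing.
    if len(response.text) > 200 and response.keystrokes < 20:
        response.flags.append("possible cut-and-paste")
    # Implausibly fast completion relative to answer length suggests automation.
    if response.seconds_on_question < len(response.text) / 50:
        response.flags.append("response time too fast for length")
    return response

r = screen(Response(text="Lorem ipsum " * 40, seconds_on_question=4.0, keystrokes=6))
print(r.flags)
```

As with panel screening, heuristics like these are best used to route suspicious responses to a reviewer rather than to reject them automatically.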

Preparing for the pitfalls of AI fraud 

Fraud will continue to be a challenge for our industry, leading to bad data quality and skewed results. Ensuring the integrity of your market research should be a top priority. It is the only way to get the true value out of your investment and gain insights that can transform the customer experience. 

As an industry, we need to embrace the benefits of AI while remaining wary of its ability to infiltrate and degrade the market research process. Let this be a cautionary tale: if you aren’t already taking action to minimize your risk of fraud, the time to act is now.