AI and marketing research from the Quirk's archives
Editor's note: Is there a topic that you'd like to see featured here? Reach out to Quirk's Editor Joe Rydholm at joe@quirks.com.
Welcome to our new feature! In each issue, Pro Insights will offer a handful of tips and commentaries on a single topic, drawn from our vast library of past magazine and e-newsletter articles. Future installments will include customer satisfaction research, health care research and B2B research. Let us know (joe@quirks.com) if there are subjects you’d like us to tackle. As always, you can start your own search by accessing these and thousands more articles for free at www.quirks.com.
Using AI to develop category associations
Diane Lauridsen, head of consumer insights at UScellular, wrote about her team’s efforts to assess generative AI’s ability to capture category associations and to better understand how well it can replicate primary research (“Leveraging generative AI for research insights”). The overarching lesson: generative AI, with guardrails, can be used in research to provide insights.
In this case study, AI was used to successfully develop a list of category associations for brand density message strategy research; however, a single question didn’t provide the needed breadth or depth of information. In short, generative AI isn’t a one-and-done solution. Rather, it is more effective for generating insights when you ask multiple, specifically worded questions and use more than one AI tool to bolster confidence in accuracy.
To be a bit more specific, when asked a simple question such as, “What comes to mind when you think about [category]?” AI generates broad, rational (head) themes that are scattered and lack commonality. When the same question was asked in primary research, consumers provided descriptive words, images, colors and emotive phrases to describe the category. The lesson: ask very specific questions and drill down to the particular words, associations and imagery consumers would use to describe a brand until all variations are exhausted.
By doing this I was able to build a full understanding of a category using AI – asking questions such as, “What images come to mind when you think about [category]?” and, “What specific features come to mind when you think about [category]?”
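That drill-down approach is easy to prototype. Below is a minimal sketch, assuming a hypothetical ask_llm() helper that stands in for whichever generative AI tool you use; the category and prompt wording are illustrative only.

```python
# Hypothetical sketch: build category associations by asking several narrow
# questions instead of one broad "what comes to mind" prompt.
# ask_llm() is a placeholder for whatever generative AI tool you use.

CATEGORY = "wireless service"  # illustrative category

PROMPTS = [
    "What images come to mind when you think about {c}?",
    "What specific features come to mind when you think about {c}?",
    "What colors do consumers associate with {c}?",
    "What emotive words or phrases do consumers use to describe {c}?",
]

def ask_llm(prompt: str) -> str:
    """Placeholder: call your generative AI tool of choice here."""
    raise NotImplementedError

def build_associations(category: str) -> dict[str, str]:
    # One specific question per call; asking narrowly surfaces consumer-style
    # language rather than scattered, rational themes.
    return {p: ask_llm(p.format(c=category)) for p in PROMPTS}
```

Running the same prompt set through a second tool and comparing the answers is one simple way to apply the multiple-tools check described above.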
Open up the black box
In “How marketing researchers can evaluate AI applications,” Lisa Horwich explored how being able to explain and interpret AI output helps researchers understand the purposes and potential impact of an AI system.
Right now, for systems that are obscure (remember, many AI foundation models are proprietary to preserve their IP), data goes into a “black box” and a result comes out, often without any rationale. AI systems that are explainable can answer the question of how a decision was made within the system. They can rationalize the output by showing the steps they took to reach the answer. My favorite example is an AI system that takes a bug as input and says it’s an insect without any explanation vs. one that shows the bug and explains that it has six legs and therefore is an insect – not an arachnid.
The advantage of explainable systems is that they can be debugged and monitored more easily, and they tend to have more thorough documentation, auditing and governance. This is especially important for highly regulated industries like health care and finance.
One thing to keep in mind is that systems don’t necessarily have to be explainable during processing – we can interpret them after the fact. Ask yourself: does the analysis or output make sense? In the case of qualitative research, is the output consistent with what you heard in your interviews or focus groups? For quantitative research, does the analysis match the data collected?
Laddering-up helped hone a survey chatbot’s performance
In “How to successfully approach generative AI applications,” Rachel Dreyfus wrote about what she learned after completing two quantitative projects for two different clients, where the surveys embedded an AI chatbot to converse with respondents.
I had the option to provide coding terms and topics upfront to seed the large language model. We would then be able to update the model with additional terms and topics after the soft launch. I lost time trying to guess the likely conversation themes and topics. When we pretested that survey version, the chatbot probed on the model terms I fed it rather than following the organic terms surfacing from the respondent conversation. I ended up abandoning my preset terms.
What worked better was to structure the conversation around the moderator’s “ladder-up” approach, whereby the chatbot repeats the response and probes a step further on the feelings and perceptions the respondent provided. With this technique, which closely imitates a focus group moderator, respondents feel “listened to” and provide more detailed responses than we’d typically get from flat open-ended questions such as, “Why did you rate the ad ‘very high’ appeal?” We also had the opportunity to ask “why” questions designed to investigate emotions, including, “How did the ad make you feel?” and “What images or phrases in the ad made you feel that way?” Connecting the respondent’s side of the conversational probes creates a richer, more insightful paragraph than a traditional open-ended verbatim response.
Infrequently, the chatbot missed the mark; fortunately, conversations quickly recovered. It usually happened when a respondent answered a question with another question (possibly using sarcasm). For example, one response about the ad’s copy was, “What does this even mean?” and the chatbot promptly responded with the textbook definition of the tagline. We would have preferred, “What do you think it means?” So the tools are not quite human, yet. And because the themes can be either positive or negative in sentiment, the multiple-choice questions act as the guardrails needed to filter and separate the likes and dislikes on the back end.
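To make the mechanics concrete, here is a minimal sketch of a ladder-up probing loop. The ask_llm() and collect_respondent_reply() helpers are hypothetical stand-ins for the embedded chatbot and the survey platform, and the prompt wording and two-rung depth are illustrative, not Dreyfus’s actual implementation.

```python
# Hypothetical sketch of a "ladder-up" probe: echo what the respondent said,
# then ask one step further about the feelings or perceptions behind it.

LADDER_PROMPT = (
    "You are probing an open-ended survey answer like a focus group moderator.\n"
    "Question asked: {question}\n"
    "Respondent said: {answer}\n"
    "Briefly restate the respondent's answer in their own words, then ask ONE\n"
    "follow-up question about the feelings or perceptions behind it. If the\n"
    "respondent replies with a question or sarcasm, turn it back to them\n"
    "(e.g., 'What do you think it means?') rather than explaining."
)

def ask_llm(prompt: str) -> str:
    """Placeholder for the chatbot/LLM service embedded in the survey."""
    raise NotImplementedError

def collect_respondent_reply(probe: str) -> str:
    """Placeholder: the survey platform shows the probe and returns the reply."""
    raise NotImplementedError

def ladder_up(question: str, first_answer: str, rungs: int = 2) -> list[str]:
    # Connects the respondent's side of each probe into one running transcript,
    # which reads as a richer paragraph than a flat open-ended verbatim.
    transcript, answer = [first_answer], first_answer
    for _ in range(rungs):
        probe = ask_llm(LADDER_PROMPT.format(question=question, answer=answer))
        answer = collect_respondent_reply(probe)
        transcript += [probe, answer]
    return transcript
```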
How do corporate researchers feel about AI?
As we reported in “Q Report respondents opine on AI, pain points and future plans,” when asked several questions about AI, respondents to the 2024 Quirk’s Q Report survey were generally clear-eyed and realistic about the technology’s threats and promises, as evidenced by these two open-end responses:
“AI (if it can deliver on the promises made) will likely have a big impact on the function. Inside client companies, DIY survey platforms were the first wave that democratized the insights tradecraft. That made the process of conducting quant research doable for the average marketer. They still needed guidance. They needed to understand best practices. They needed to understand the philosophy and underpinnings of good research. AI is the second wave and MAY provide the guidance and thought leadership to execute good research. If true, the average insights person is going to have to redefine the value they are bringing to the table. This disruption may bring good."
"It depends on what your MR team is doing. If you’re spitting out rote tracking reports every quarter/month/etc., then AI absolutely puts your team/role at risk. But if your team is trying to synthesize and elevate a blend of primary, secondary, cultural trends, behavioral data, etc., into actionable insights for specific business needs/decisions, I think AI elevates that type of team/role. Insights and MR roles are likely going to have to evolve, but on balance, I think AI will elevate the industry. None of us got into MR/insights because we loved doing those big tracking reports! I think AI will free up many insights professionals to do more of what many of us love about being in insights – telling deep, human stories with data that will impact our respective businesses."
Where do you begin with AI?
For brands looking to dip their toes in the AI water, Thor Olof Philogene (“A guide to generative AI for insights”) suggested that a good place to start is by examining the areas where you’re naturally drawn to using these tools.
Before you invest in any solution, you want to make sure that generative AI will actually fit into your workflows. Likewise, it’s a good idea to gauge how open you and your team are to incorporating these technologies. This will help you determine if there need to be more guardrails in place or, conversely, more encouragement to experiment responsibly.
Sketch out the inefficiencies in your workflows and explore whether it’s something you could automate in whole or in part. These are likely areas where generative AI could offer your team a major productivity boost.
Chances are you don’t have time for endless experimentation. A good way to focus your exploration is to look at the top priorities for your team and your organization and focus efforts where they will have the most impact.
Be sure to clearly outline the risks for your function and organization and don’t hesitate to get advice from relevant experts in tech or security. Once the main risks are defined, you can align on the risk level you’re willing to tolerate. While minding the risks, also don’t be afraid to ask the good kind of what-if questions. If you see opportunities, be brave enough to share them. Now is the time to voice them. Likewise, listen to other parts of the organization to see what opportunities they’ve identified and see what you can learn from them.
It’s our firm belief that the future of insights will still need to combine human expertise with powerful technology. The most powerful technology in the world will be useless if no one actually wants to use it.
AI’s impact on research agency staffs
Of course, research firms are also seeing their worlds upended by AI and its related technologies and in his article “How AI will transform research agencies and their offerings” JD Deitch highlighted the impact AI is having and will have on company headcounts.
Since the dawn of market research, there has been one and only one way to build scale: through labor. Even in the digital age, the largest research firms have been those with the ability to muster battalions of researchers to design, run and interpret research. There was hope that insights platforms might change this. Yet while upstart firms have built competent, user-friendly, more labor-efficient platforms to execute research, they essentially transferred the labor problem to the client.
Advances in AI are now eliminating large labor pools as a necessary factor for scale, even in processes that previously required extensive human oversight, in a way that non-AI-based automation and DIY platforms could not. Today, there are already viable companies commercializing AI products that span the entire research process: interpreting clients’ business questions; creating a research brief; designing the research; fielding the research; processing the data; and reporting and interpreting both quantitative and qualitative findings.
This means that AI can now effectively replace a full research and operations team, operating at a scale previously unattainable with human labor alone. This evolution from labor-intensive projects to AI-driven products marks a pivotal transformation in the industry. First-movers and disruptors who are not starting from scratch but are instead leveraging AI throughout the entire workflow have a significant advantage. These firms are positioned to take market share and secure long-term success by integrating AI comprehensively rather than using it only for specific elements.
AI probing gets respondents talking (or typing)
Contrary to expectations, Eric Tayce argued in “How AI can actually make research more people-centric” that AI has much to offer in terms of adding some warmth to survey-taking.
Researchers have long acknowledged the limitations of survey research and its inability to re-create the experience of making real-world decisions. This is an area where artificial intelligence can help. For starters, AI allows us to minimize the unnatural artifice of survey research through conversational experiences via chatbots, even if just for small portions of the survey. For instance, our own experimentation shows that following open-end responses with conversational AI-powered probing leads to an average of 270% more unstructured data being collected from respondents.
Organizations can also use generative LLMs to mimic natural conversation through iterative questioning techniques that can capture a much wider range of consumer perceptions and opinions than traditional approaches. In fact, we’ve found that properly trained chatbots with well-defined guardrails can reliably identify optimal price levels, investigate decision drivers and generally deliver a richer experience for the respondent. The humanizing trend sets the tension and artificial intelligence solves for it.
In addition, AI tools can analyze unstructured data more effectively than traditional methods, parsing out more organic, more human insights. Unstructured data has traditionally held limited business value for organizations, simply because the methods for analyzing it are either computationally too complex or logistically too time-consuming. However, AI’s massive computing power has removed this barrier. Researchers are unlocking new value by using AI-powered algorithms to execute techniques like sentiment analysis and theme detection on unstructured data. These deliver respondent-level indicators that can be used to predict behaviors or to develop targeting strategies.
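As one illustration of what respondent-level indicators can look like, here is a minimal sketch using the open-source Hugging Face transformers library and its default pipelines. The theme labels are illustrative placeholders, not a recommended codebook, and this is not the specific tooling the author describes.

```python
# Minimal sketch: turn one open-end verbatim into respondent-level sentiment
# and theme indicators. Assumes the Hugging Face transformers library and its
# default models; the theme labels below are illustrative only.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")       # default English sentiment model
themes = pipeline("zero-shot-classification")    # default NLI model for theme detection

THEME_LABELS = ["price", "coverage", "customer service", "brand image"]

def score_verbatim(text: str) -> dict:
    s = sentiment(text)[0]   # e.g., {'label': 'NEGATIVE', 'score': 0.97}
    t = themes(text, candidate_labels=THEME_LABELS)
    return {
        "sentiment": s["label"],
        "sentiment_score": round(s["score"], 3),
        "top_theme": t["labels"][0],             # highest-scoring theme
        "theme_scores": {label: round(score, 3) for label, score in zip(t["labels"], t["scores"])},
    }

# Each scored verbatim becomes a row of indicators that can sit alongside the
# structured survey data and feed a predictive or targeting model.
print(score_verbatim("The coverage is great but the plan feels overpriced."))
```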