Making real progress with AI 

Editor’s note: Monika Rogers is VP, growth strategy, and Seth DeAvila is AVP, insights and strategy operations, at market research and data analytics firm CMB. 

As AI becomes embedded in every corner of the enterprise, insights teams face an inflection point. With stakeholders increasingly able to generate answers on their own, our value is no longer defined by speed or output volume, but by how well we elevate the questions asked, deepen the rigor applied and strengthen the confidence behind organizational decisions.

Our own journey with AI reflects this shift. When we first began experimenting with generative AI tools, the results were both exhilarating and exasperating. Over time, AI evolved from an experimental tool into validated solutions that are embedded in our workflows and support clearer decisions and stronger client outcomes. 

In this article, we’ll share the four pathways that helped us scale AI across our business, the lessons we learned about balancing human and artificial intelligence and the cultural shifts we fostered to make AI innovation possible. We’ll also connect these experiences back to the new reality facing corporate insights teams: navigating change, governing AI responsibly and demonstrating greater strategic relevance in organizations where everyone now has access to AI but not everyone knows how to use it well.

Four pathways to scale insights with AI 

Our AI journey didn’t start with a road map, but rather a commitment to exploration and learning. We experimented with top-down and bottom-up approaches to innovation, using internally developed and externally vetted tools. Along the way, we converged on four pathways where AI adds significant value within insights (see Exhibit 1).

1. Scaling the qualitative tool kit

The first approach we took focused on using new and existing third-party research platforms to embed AI into qualitative workflows. As we vetted a steady stream of gen AI-enabled solutions, we found strong performers in AI-moderated interviews and conversational surveys that rely on AI probing.

Both AI-moderated interviews and conversational surveys excel when we have a well-defined set of research objectives that can translate into a structured discussion guide. The logistical advantages are significant when you need to conduct these interviews at scale, across multiple markets and in multiple languages. Across multiple studies, AI moderators doubled the number of interviews possible within our timeline while maintaining reasonable quality. Conversational surveys yielded 75% longer answers and 46% more themes than traditional open-ended questions.

These same tools also revealed their limits. AI struggled when nuance, emotional subtlety or cultural context was central to the learning goals. We saw AI miss tone shifts, hesitations and the micro-stories that matter in foundational or exploratory research. Getting better outcomes required considering factors like urgency, context, topic complexity and decision risk. As third-party tools and the underlying LLMs evolve, we continue to explore different combinations of AI and human insight to improve qualitative outcomes.

2. Automating research processes

In addition to evaluating our qualitative tool kit, we shifted inward to improving our internal processes. It quickly became clear that we had far more ideas for using LLMs to assist in our work than our internal AI team could handle. So, we launched innovation sprints using rapid, six-to-eight-week cycles where volunteers from across the company tested real use cases using ChatGPT and Copilot. Participation snowballed: 20 employees in Sprint 1, then 40 in Sprint 2, organized into eight coordinated teams. By Sprint 3, nearly every employee wanted access to LLMs, convinced they were falling behind without it.

Teams built custom GPTs to interpret tables, draft questionnaires, generate surveys, summarize insights, assess response quality and more. Productivity gains were real, but the biggest breakthroughs came not from full automation but from assisted intelligence, workflows where humans and AI collaborated. As Ethan Mollick describes in his book, “Co-Intelligence: Living and Working with AI,” some sprint teams became “centaurs,” dividing tasks between human and machine; others became “cyborgs,” fully integrating AI into every step. 

As the number of custom GPTs exploded, so did opportunities to reinvent nearly every step of the research process, promising speed and efficiency as well as new ways to add value. For example, GPTs changed how we check questionnaires against client research objectives and identify potential gaps. They improved our process for data analysis and collaboratively uncovering insights. And GPTs improved our team's approach to translating findings and insights into actionable recommendations.
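To make the questionnaire-versus-objectives check concrete, here is a toy sketch of the underlying idea. In practice a custom GPT reads the draft questionnaire and the client's objectives and reasons about coverage; this stand-in uses simple keyword overlap instead of an LLM, and the objectives, questions and `min_overlap` threshold are all illustrative, not from the actual workflow.

```python
# Toy stand-in for an LLM-based coverage check: flag research objectives
# that share too few words with every question in the draft questionnaire.

def tokenize(text: str) -> set[str]:
    """Lowercase a string and split it into a set of words."""
    return set(text.lower().replace("?", "").split())

def find_coverage_gaps(objectives: list[str], questions: list[str],
                       min_overlap: int = 2) -> list[str]:
    """Return objectives that overlap with no question by at least
    `min_overlap` shared words (i.e., likely coverage gaps)."""
    gaps = []
    for obj in objectives:
        obj_words = tokenize(obj)
        covered = any(len(obj_words & tokenize(q)) >= min_overlap
                      for q in questions)
        if not covered:
            gaps.append(obj)
    return gaps

objectives = [
    "Understand drivers of brand trust among new customers",
    "Measure willingness to pay for premium delivery",
]
questions = [
    "How much do you trust this brand as a new customer?",
    "Which brand features matter most to you?",
]
# The second objective matches no question, so it is flagged as a gap.
print(find_coverage_gaps(objectives, questions))
```

An LLM replaces the crude word-overlap test with semantic judgment, but the workflow shape is the same: enumerate objectives, test each against the instrument, surface the gaps for a researcher to resolve.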

3. Building interactive AI personas

Our next pathway focused on building narrowly defined AI personas trained from multiple studies that end clients could interact with to support daily decision-making. Early on, we explored synthetic data platforms to create digital twins, but we weren’t convinced that the outcomes were strong enough. 

We then discovered a solution for building AI personas with a rich mix of data sources (qualitative, quantitative, primary, third-party, etc.), which we tested through internal pilots and client partnerships. In our pilot with DoorDash, we quickly learned that it wasn’t enough to load persona data into a third-party solution. Getting meaningful and responsible output required ensuring the training data covered the full persona dimensions, validating and augmenting results against new datasets, and identifying the use cases where teams could rely on the persona independently versus where expert prompting and curation were essential. The work reinforced an important point: while AI personas and other AI tools can accelerate decision-making, without the right oversight they can just as easily lead teams astray.
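One of the validation steps described above, checking persona output against new datasets, can be sketched in miniature. The example below compares a persona's predicted answer distribution on a survey question with a fresh holdout sample of real respondents; the function names, answer options and 0.10 tolerance are our own illustrative assumptions, not part of any vendor's solution.

```python
# Hedged sketch of persona validation: does the AI persona's answer
# distribution on a question stay close to real respondents in a
# holdout dataset? Threshold and data are illustrative.

def total_variation(p: dict[str, float], q: dict[str, float]) -> float:
    """Total variation distance between two categorical distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def persona_is_calibrated(persona_dist: dict[str, float],
                          holdout_dist: dict[str, float],
                          tol: float = 0.10) -> bool:
    """Treat the persona as usable on this question only if its answers
    are within `tol` total variation of the holdout respondents."""
    return total_variation(persona_dist, holdout_dist) <= tol

persona = {"very likely": 0.42, "somewhat likely": 0.38, "unlikely": 0.20}
holdout = {"very likely": 0.45, "somewhat likely": 0.35, "unlikely": 0.20}
print(persona_is_calibrated(persona, holdout))  # small gap -> True
```

Questions where the distance is large mark exactly the use cases the article describes: places where the persona should not be used independently and expert prompting or augmented training data is needed.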

4. Integrated data and strategy

One of our most recent experiments centered on our self-funded brand trust program. Using our proprietary results as a foundational dataset, we built a suite of tightly scoped tools, one trained as the “research expert” on our brand trust dimensions, others ingesting social media, news and competitive signals to apply that expertise to evaluate current market activity. When orchestrated as agents, this system can surface patterns, test hypotheses and generate strategic recommendations to build trust into communication, CX, product and corporate strategy. 

Bringing this type of mixed-method synthesis to life required building skill sets and using new platforms that support agent orchestration and vibe coding. The prototype shows clear promise for delivering client-ready tools with richer synthesis, structured decision frameworks and a way to extend insights after fieldwork is complete.
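The orchestration pattern described above, one "research expert" agent plus agents ingesting market signals, can be sketched in a few lines. In the real system each agent wraps an LLM trained on its own data and an LLM performs the final synthesis; here each agent is a plain function so the fan-out-and-combine structure is visible. All names are our own illustrative stand-ins.

```python
# Minimal sketch of the agent orchestration pattern: route one query to
# every specialist agent, then combine their findings. Stand-in functions
# replace the LLM-backed agents of the real system.

def trust_expert(query: str) -> str:
    """Stand-in for the agent trained on the brand trust dimensions."""
    return f"Trust framework read on: {query}"

def social_listener(query: str) -> str:
    """Stand-in for the agent ingesting social, news and competitive signals."""
    return f"Market signals relevant to: {query}"

AGENTS = {"expert": trust_expert, "signals": social_listener}

def orchestrate(query: str) -> str:
    """Fan the query out to every specialist agent and join their outputs.
    A production orchestrator would have an LLM synthesize this step."""
    findings = [agent(query) for agent in AGENTS.values()]
    return " | ".join(findings)

print(orchestrate("new CX campaign"))
```

The value of the pattern is separation of expertise: each agent stays tightly scoped to one dataset or task, and the orchestrator, not any single model, is responsible for synthesis across them.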

And while we began with our own brand trust program, it quickly became apparent to us that this approach could be applied to other topics and other research programs. All of these efforts illustrate how AI-powered solutions can elevate the role that research and researchers play. We continue to work with clients to advance AI-supported data strategy and bring new solutions to market.

Using AI to meaningfully scale insights

These four pathways taught us how AI can meaningfully scale insights — but they also revealed something deeper. Scaling AI isn’t just a tool problem; it’s a people, culture and governance problem. 

In Part 2, we’ll explore how we built a culture capable of experimenting with AI, how governance emerged as a strategic necessity and what agentic systems mean for the future of the insights function.

Stay tuned for Part 2: Building culture and governance in an AI-driven enterprise.

References:

  • Mollick, E. Co-Intelligence: Living and Working with AI. Penguin Random House, 2024.
  • AWS Insights. The Rise of Autonomous Agents: What Enterprise Leaders Need to Know About the Next Wave of AI. Amazon Web Services, 2025.
  • McKinsey & Company. The Agentic AI Opportunity. McKinsey Quarterly, 2025.