Editor’s note: Monika Rogers is VP, growth strategy, and Seth DeAvila is AVP, insights and strategy operations, at market research and data analytics firm CMB. 

In Part 1, we shared the four pathways where AI has delivered meaningful value inside the insights function – from scaling qualitative work, to automating research processes, to building interactive personas, to integrating data and strategy through agentic systems. These pathways taught us a fundamental truth: AI delivers its greatest impact when thoughtfully implemented with human expertise and insight.

In Part 2 we’re digging deeper. Scaling AI isn’t only about methods. It’s about the culture that supports experimentation, the governance that ensures responsible use and the evolution from ad hoc tools to agentic systems that will increasingly shape the future of insights.

Building the right culture for innovation

Gaining momentum with AI wasn’t a given. We started with a philosophy: choosing the right tool mattered, but how we applied it mattered just as much, if not more. Early reactions ranged from excitement to skepticism, and many were unsure where to start. A pivotal breakthrough came when one team member turned a tedious 10-minute workflow into a two-minute AI-assisted process. That single demonstration shifted perceptions: once leadership recognized the ingenuity and provided clear guardrails for broader experimentation, reluctance began to fade and excitement grew about what was possible. By encouraging questions, surfacing concerns and resolving them collaboratively through testing, we opened the door to AI innovation the right way, with transparency and evidence.

This early success sparked our innovation sprints. With a small oversight team investing only a few hours a week, we created a safe environment for broad-scale, rapid experimentation. People saw how AI made their work easier, not because they were told how to use it, but because they discovered its value themselves.

As comfort grew, we expanded enterprise ChatGPT licenses, launched structured training, held weekly office hours and shared demo videos and one-pagers for different learning styles and adoption mindsets. We also introduced “flash polls” to measure what was working, what wasn’t and where teams needed support.

What began with a group of sprinters became a confident culture of experimentation. As skills grew, so did ambition. Sprint teams started researching more complex issues, finding new workarounds and looking at how far they could extend their applications. Late adopters found security in applying vetted GPTs where they could follow a clear process and get predictable outcomes. Not every attempt was a success, but regardless of the outcome everyone felt part of the “always on” innovation culture.

In retrospect, the role of AI in our work naturally progressed through three stages:

  • Acceleration – AI accelerates existing time-intensive tasks, reclaiming researcher time.
  • Improvement – AI improves the quality of our work, strengthening synthesis and sharpening interpretation.
  • Creation – AI unlocks entirely new ways to integrate, activate and extend research.

Our process allowed teams to take the fastest path to delivering value and layer on complexity as they learned. 

AI + insights shift: From tools to agents 

If 2024 was the year of embedded AI tools, 2025 became the year we began working with custom GPTs and agents. That shift from isolated prompts to orchestrated systems offers exciting possibilities. As recent McKinsey and AWS analyses note, these systems can interpret signals across datasets, test hypotheses and propose next steps within human-defined guardrails. But as our two pilots revealed, the real impact emerges only when they are grounded in strong research fundamentals and governed by expert oversight.
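
To make the guardrail pattern concrete, here is a minimal, hypothetical sketch: an agent proposes a next step, and a human-defined policy decides whether it runs autonomously or is routed for review. Every name here (ProposedStep, AUTONOMY_POLICY, the risk tags) is an illustrative assumption, not a description of any production system.

```python
# Hypothetical sketch: an agent proposes next steps; a human-defined
# policy decides which may run autonomously. Illustrative only.
from dataclasses import dataclass

@dataclass
class ProposedStep:
    description: str  # what the agent wants to do next
    risk: str         # "low" = routine synthesis; "high" = client-facing recommendation

# Human-defined guardrail: which risk levels may proceed without review.
AUTONOMY_POLICY = {"low": "auto_run", "high": "human_review"}

def route_step(step: ProposedStep) -> str:
    """Route a proposed step according to the guardrail policy."""
    # Unknown risk levels default to human review, the conservative path.
    return AUTONOMY_POLICY.get(step.risk, "human_review")

proposals = [
    ProposedStep("Re-run segment sizing against the Q3 tracker data", "low"),
    ProposedStep("Recommend a pricing change to the client", "high"),
]
for p in proposals:
    print(f"{p.description} -> {route_step(p)}")
```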

For insights leaders, the implications of custom GPTs and agents are clear:

  • Agents don’t replace expertise; they amplify it. Their value depends on the quality of the underlying research and the clarity of the decision frameworks guiding them.
  • Activation becomes ongoing, not episodic. Agents allow insights to be refreshed, recombined and reexamined across time – not frozen in a single report. 
  • Governance becomes strategic. Organizations must define where autonomy is allowed, where human review is required and how to monitor model drift or bias (a simple monitoring sketch follows this list).
  • AI systems need continuous evolution. Insights teams must prepare to maintain, refine and evolve AI-based systems over time.
  • Internal teams and external partners play complementary roles. Corporate insights teams can ensure effective use and activation internally; agencies can support innovation and offer methodological rigor, multisource synthesis and experienced oversight and training.
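
As one illustration of what monitoring for drift might look like, the hypothetical sketch below tracks a single quality signal (here, a made-up weekly agreement rate between an agent’s classifications and human-coded samples) against the rate observed at validation, and flags weeks that slip past a tolerance. The metric, threshold and names are assumptions chosen for illustration, not a prescribed method.

```python
# Hypothetical drift check: flag weeks where an agent's agreement with
# human-coded samples falls more than a tolerated amount below baseline.
def drift_alert(baseline: float, weekly_rates: list[float], tolerance: float = 0.05) -> list[int]:
    """Return indices of weeks whose agreement rate drifted past tolerance."""
    return [i for i, rate in enumerate(weekly_rates) if baseline - rate > tolerance]

baseline = 0.92                    # agreement rate measured at validation
weekly = [0.91, 0.90, 0.86, 0.84]  # illustrative weekly re-checks
for week in drift_alert(baseline, weekly):
    print(f"Week {week + 1}: agreement drifted past tolerance; trigger human review.")
```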

Agentic systems thrive when tethered to human judgment and purpose – a view reinforced across IBM, McKinsey and other thought leaders. 

Leveraging strategic governance to build trust

If there’s one universal takeaway from our experimentation with AI, it’s this: timing matters. During our innovation sprints, the number of custom GPTs grew quickly. That creativity eventually raised questions: Which GPTs worked best? How were they trained? When should an experimental tool become an approved one? Governance emerged naturally – not as restriction, but as a source of clarity.

As PwC and Accenture have highlighted, governance introduced too early can stifle discovery – too late, and it erodes trust. The sweet spot is when innovation has enough momentum to justify oversight.

Expanding usage systematically also strengthened adoption momentum. The original sprinters became mentors. Training, office hours and shared resources created a community of teachers and learners. The pace of innovation accelerated because people grew together.

As momentum built in our AI applications, we moved through four stages and developed governance layers at each one to balance oversight and innovation*:

Stage 1: Exploration – try, test and discover

Goal: Teams freely test tools and ideas to uncover where AI adds value.
Governance: Light-touch guardrails ensure people can explore safely – with clarity on ethical use, data privacy and appropriate boundaries for experimentation.

Stage 2: Pattern recognition – focusing on what works

Goal: Identify the most promising use cases and apply learnings from what worked.
Governance: A simple prioritization framework helps balance impact, risk and alignment with organizational goals.

Stage 3: Implementation – bringing AI into the organization

Goal: Solutions become part of everyday work as teams turn pilots into processes.
Governance: Clear training, onboarding and adoption programs ensure people know how to use tools responsibly and confidently. 

Stage 4: Evolution – continuously learning and improving

Goal: Expand agentic orchestration as needs evolve.
Governance: Ongoing feedback loops to ensure exploration continues, development stays strategic and implementation remains effective over time.

*Knowledge management systems and tech stacks become increasingly critical as insights organizations integrate AI into their workflows and products. These are outside the scope of this article.

This staged maturity model – from simple acceleration to full AI-augmented orchestration – mirrors the deployment road map outlined in OpenAI’s whitepaper for scaling enterprise AI. The whitepaper argues that sustainable value from AI comes only when deployments are underpinned by solid infrastructure, robust governance and lifecycle management – all of which became clear to us through our evaluation process.

For clients, effective governance is worth evaluating both internally and in external partners. Enterprises deciding whether to build or buy AI insight capabilities should also consider the skills of their teams. Many client teams excel at activation and organizational context and could benefit from agency skills in methodological rigor, validation of third-party research technology and complex AI-enhanced synthesis. It may be more effective for insight leaders to build internal fluency in using AI to activate insights while leaning on partners to develop new methods, validate accuracy and conduct specialized research and analysis where nuance matters most.

As internal stakeholders increasingly use LLMs like ChatGPT or Copilot for DIY insights, insights professionals need to demonstrate how expert-led research meaningfully elevates the questions, the rigor and the confidence behind the answers.

Scaling AI and the value of trusted insights

In just a few years, the conversation around AI has shifted from “How do we test it?” to “How do we use it well?” The early excitement has matured into a focus on systems, people and purpose. We have found ways to use AI responsibly that genuinely elevate decision-making quality, rather than simply accelerate output.

Scaling AI isn’t linear. It moves in waves, with bursts of discovery, reflection and refinement. The breakthroughs come from curiosity and the courage to ask: What if we tried it this way?

Trust follows the same arc. It grows through transparency and performance. The future won’t be defined by who automates fastest, but by who can best deliver confidence and clarity. Therefore, AI success goes well beyond better/faster/cheaper. It’s about building trust in the insights, making them more accessible and applying them more effectively to strategy and decision-making. 

References

  • PwC. Agentic AI: The New Frontier in GenAI – An Executive Playbook. PricewaterhouseCoopers, 2024.
  • Accenture. Six Key Insights for C-Suite Executives to Maximize Return on Agentic AI. Accenture Strategy, 2025.
  • IBM Institute for Business Value. The Agentic AI Operating Model. IBM, 2025.
  • Deloitte. Agentic AI in the Enterprise: 2028 Outlook. Deloitte Insights, 2025.
  • OpenAI. From Experiments to Deployments: A Practical Guide for Scaling Enterprise AI. OpenAI, 2025.