In a year when most insight teams are being asked to do more with less, innovation is one of the few lines in the budget that can’t simply be switched off. History shows that organizations that continue to invest in innovation through downturns tend to outperform their peers in the recovery.

What has changed is the way we generate the evidence behind those innovation bets. Traditional innovation research, built around a handful of large, late-stage tests, is colliding with shrinking budgets, shorter timelines and rising expectations. At the same time, AI is opening up entirely new ways to learn, from rapid synthetic screening to AI-assisted qual and automated insight extraction.

This article looks at how we got here, what’s breaking in the old model and how AI is reshaping innovation research in practice.

The old model: strong governance, slow learning

For many CPG and consumer brands, the default innovation framework is still some flavor of Stage-Gate or phase-gate: a linear process where ideas move through discovery, development, testing and launch, punctuated by formal decision “gates.” 

That structure brought discipline to innovation: clear go/no-go decisions; standardized KPIs; risk management and financial control.

But from a research perspective, it also hardwired certain behaviors:

  • Heavy reliance on late-stage validation. Many organizations built toolkits dominated by large, normative concept and product tests just before (or even after) major investment decisions. Learning earlier in the process was often treated as a nice-to-have. 
  • Limited room for iteration. When each full validation test consumes a big slice of budget and weeks of fieldwork, teams become reluctant to run multiple rounds. Concepts are treated as single shots, not evolving hypotheses.

In benign economic conditions, those inefficiencies were tolerable. Today, they’re becoming existential.

Why the traditional playbook is under strain

Three pressures are now converging on the classic innovation research model:

1. Time and budget compression. Insight teams report ongoing pressure to shorten timelines and cut costs, even as pipelines and launch targets stay the same or increase. That makes it harder to justify large, single-shot studies that don’t materially reduce risk earlier in the process.

2. Data quality and reach. Declining survey participation, professional respondents and fraudulent entries are raising questions about the reliability of traditional samples, especially for niche or low-incidence audiences. Synthetic respondents, AI-generated data and AI-driven fraud detection have emerged partly as a response to these issues.

3. Richer questions, not just sharper numbers. Marketers increasingly want to know not only which concept wins but why, for whom and how it should be refined. That demands more iterative learning between gates, plus tighter integration of qual and quant, than many existing toolkits were designed to deliver.

The net result is a growing mismatch: a world that demands more, faster learning and a research model currently optimized for a small number of large, late decisions.

What AI changes: from big bets to continuous learning

AI doesn’t replace good research practice or robust methodology. But it does change the economics of learning. Three shifts, in particular, are reshaping innovation research.

1. From occasional tests to continuous, AI-accelerated learning

Generative and analytical AI are already speeding up many of the slowest parts of the research workflow:

  • drafting and adapting questionnaires
  • summarizing open-ends
  • clustering ideas or claims
  • extracting themes and sentiment from qual at scale
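The theme-extraction and sentiment steps above can be sketched in deliberately simplified form as keyword-based coding of open-ended responses. Real platforms use LLMs or embedding models rather than fixed keyword lists; the themes, lexicons and function names below are purely hypothetical illustrations of the idea.

```python
# Illustrative sketch only: production systems use LLMs or embeddings,
# but the core workflow -- mapping open-ends to themes and a sentiment
# score, then counting themes across a test -- looks like this.
# All theme and sentiment keywords here are hypothetical.
import re
from collections import Counter

THEMES = {
    "taste": {"flavor", "taste", "sweet", "bitter"},
    "price": {"price", "expensive", "cheap", "value"},
    "packaging": {"bottle", "pack", "label", "design"},
}
POSITIVE = {"love", "great", "good", "like"}
NEGATIVE = {"hate", "bad", "expensive", "bitter"}

def code_open_end(text: str) -> dict:
    """Assign themes and a crude sentiment score to one open-ended answer."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    themes = [t for t, keywords in THEMES.items() if words & keywords]
    sentiment = len(words & POSITIVE) - len(words & NEGATIVE)
    return {"themes": themes, "sentiment": sentiment}

responses = [
    "Love the flavor but the price is too expensive",
    "Great bottle design, good value",
]
coded = [code_open_end(r) for r in responses]
# Aggregate theme mentions across all responses in the test.
theme_counts = Counter(t for c in coded for t in c["themes"])
```

At scale, the same pattern runs across thousands of verbatims in minutes, which is what makes the smaller, faster learning loops described below economically feasible.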

Platforms like Toluna Start, for example, embed AI into surveys so respondents can answer in more natural language, with AI summarizing comments into themes and analyzing sentiment in near real-time.

For innovation teams, the impact is less about automation for its own sake and more about creating space for multiple, smaller learning loops between formal gates:

  • screening a wide set of ideas cost-effectively
  • quickly iterating language, benefits, attributes or reasons to believe (RTBs)
  • running targeted learning sprints between early and late validation

In effect, AI makes it economically feasible to use the right tool for the “learning” job and a different one for the “validation” job, rather than forcing everything through a single, expensive template.

2. From sample scarcity to synthetic abundance

The most visible change, and arguably the most contentious, is the rise of synthetic respondents. Synthetic respondents (or synthetic personas) are virtual survey takers generated by AI models trained on large volumes of real-world data. They are designed to mimic the attitudes and behaviors of real people, often with rich backstories and attributes, and can be queried at speed and scale. 

Across the industry, synthetic research promises to:

  • dramatically reduce fieldwork time, by minimizing recruitment
  • unlock hard-to-reach audiences, by simulating segments that are rare, sensitive or geographically dispersed
  • enable larger testing spaces, by allowing dozens of ideas, claims or flavors to be screened in parallel, often within hours 

Providers are taking different approaches. Toluna, for instance, has developed synthetic personas built from its first-party global panel, enriched with predicted traits and behaviors so each persona behaves like a unique survey respondent rather than an “average type.” Internal parallel tests have reported high correlations with human samples on key metrics, and early commercial use cases in rapid screening of claims and messages suggest potential cost reductions of around 25-40% versus running only traditional validation tests.

However, there is broad agreement on one crucial point: synthetic data is not a free pass to skip robust validation.

3. From dashboards to decisions

The third shift is more subtle but equally important. AI is changing how innovation data is interpreted and used inside organizations.

Where traditional dashboards largely reported “what happened” (e.g., top-two box scores, preference shares, purchase intent), newer AI-assisted systems are evolving towards decision intelligence:

  • automatically surfacing patterns (e.g., which attributes drive choice for specific personas or missions)
  • learning from both synthetic and human tests to optimize the next iteration

Done well, this brings research closer to the way innovation leaders actually think: juggling trade-offs, exploring what-if scenarios and continually reallocating budget towards the ideas with the strongest evidence. 

Toluna as a case in point

Toluna is re-architecting its innovation offer around these AI-enabled capabilities, rather than treating AI as an add-on. The result is an end-to-end, agentic AI system that includes:

  • Quality measures that enhance inputs, throughputs and outputs to deliver high-quality insights.
  • Rapid screening of ideas, claims and flavors in hours, using synthetic personas built from long-standing first-party panel data and tuned to behave like individual respondents.
  • Agile concept testing using custom templates at scale to provide clients with the flexibility to use their own KPIs.
  • AI question probes, theme extractors and sentiment analysis to extract richer qualitative learning from each test.
  • Flexible servicing models from DIY to full consultative support, so teams can integrate these tools into their existing ways of working.

Crucially, this approach is meeting today’s challenges of increasing scale and decreasing timelines without sacrificing quality. Combining AI tools and solutions with deep human expertise delivers what Toluna calls augmented intelligence.

Other global agencies and platforms are also leveraging AI to compress timelines and turn existing knowledge assets into living decision tools.

Taken together, these moves signal an industry-wide shift – from viewing AI as a bolt-on feature to treating it as core infrastructure for innovation learning.

What this means for insight leaders

The AI wave is no longer on the horizon; it has reached the shore. The imperative now is to understand how to adopt it in a way that strengthens, rather than weakens, the evidence behind your company’s innovation bets.

Three themes are emerging from early adopters:

1. Rebalancing the portfolio. Many teams are deliberately shifting a portion of spend away from late-stage validation into earlier-stage, AI-enabled learning. The goal is not to cut validation but to ensure that what reaches those expensive gates has already been iterated through multiple rounds of cheaper, faster testing.

2. Creating transparency for synthetic research. The field is new: many unknowns remain, opinions diverge and norms are still being established. Insisting on transparency about external providers’ methodologies, validation approaches and the data used to train synthetic models will help build knowledge and trust.

3. Building new skills and partnerships. AI literacy is becoming a core skill inside insight teams, as researchers learn to blend traditional expertise with rapidly maturing AI capabilities. At the same time, relationships with external providers are shifting from transactional “project delivery” towards more collaborative design of insight systems that blend AI, human expertise and organizational context.

Three questions to stress-test your own approach

To close, this isn’t about buying a particular tool or replicating another company’s stack. It’s about rethinking how you learn your way to stronger innovations in a world where AI has changed the speed and cost of insight.

Three questions can help you assess where you stand:

  1. Where in our innovation process are we still relying on one or two big, late-stage validation tests – and what learning are we not getting earlier as a result?
  2. How could synthetic personas, AI-assisted qual and automated analytics help us explore more ideas, claims or territories between gates?
  3. Do we have the right skills and partners in place to treat AI as a core capability in innovation research, not a black box and not a bolt-on?

The answers will be different for every organization. But the direction is clear: the old model of innovation research – slow, expensive and overly reliant on late-stage validation – is giving way to a new, AI-enabled system of continuous learning. The opportunity for insight leaders is to shape that system deliberately, before it shapes them.

To learn more about how Toluna is specifically delivering AI-centric solutions on a global scale alongside our highly experienced insights professionals, visit our AI hub at www.tolunacorporate.com/ai