Dr. Heli Holttinen
Chief Product Officer and Co-Founder, Cambri
Management Summary
The problem: Traditional survey-based concept testing solutions mislead brands
New product failure rates remain high because what consumers say in surveys often doesn’t reflect what they do in the market. As a result, consumer brands end up killing many potential winners while letting failing concepts slip through.
The solution: Launch AI overcomes the say-do gap to help brands increase revenue from new products
For five years we at Cambri have been working to solve this problem with Launch AI. We combine consumer research with data science to predict real market behavior. Launch AI integrates survey KPIs, open-ended feedback and POS data to assess a product value proposition’s true sales potential. Trained on survey data and real new product sales outcomes, Launch AI’s machine learning models identify likely successes and failures with 81% accuracy, helping brands at least double new product sales per average store (ROS – Rate of Sales).
The evidence: How product concept testing drives in-market success (or failure)
Theoretical foundation: While strong in-market execution is essential for new product success, everything begins with the product value proposition – and that is exactly what concept testing evaluates. If the product value proposition doesn’t resonate with the target audience, nothing else matters: consumers won’t notice the product and will overlook the marketing, and those who do notice it will judge it inferior to what they already use and walk away. If you don’t get the product value proposition right from the start, everything that follows is set to fail.
In-market evidence: Using POS data, we analyzed new product performance in the food and beverage sector across Europe and the U.S. The results are very clear: a new product must win within the first six months. This is when the product value proposition (product concept) carries the most weight. During this time, target consumers must regard the product as worth trying. If they don’t, our data shows that the product fails.
Problem: The traditional product concept testing paradigm stubbornly trusts what consumers say
Traditional survey-based concept testing methods rely on Likert scale questions and KPIs like purchase intent, relevance, uniqueness, believability and expensiveness. Based on KPI scores against category benchmarks, tested product concepts are classified into different profiles, indicating the size and type of in-market potential. The underlying assumption is that consumer responses predict real-world purchase behavior.
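To make the traditional paradigm concrete, here is a minimal sketch of how benchmark-based classification typically works; the KPI names, benchmark values and profile rules below are illustrative assumptions, not any vendor’s actual norms.

```python
# Hypothetical illustration of traditional benchmark-based concept
# classification. KPI names, benchmark values and profile rules are
# illustrative only.
CATEGORY_BENCHMARKS = {          # category norms on a 5-point Likert scale
    "purchase_intent": 3.6,
    "relevance": 3.8,
    "uniqueness": 3.2,
    "believability": 3.9,
}

def classify_concept(kpi_means: dict) -> str:
    """Bin a tested concept into a profile by counting KPIs at or above benchmark."""
    beats = sum(kpi_means[k] >= v for k, v in CATEGORY_BENCHMARKS.items())
    if beats == len(CATEGORY_BENCHMARKS):
        return "strong potential"
    if beats >= 2:
        return "niche / needs work"
    return "weak"

print(classify_concept({
    "purchase_intent": 3.9, "relevance": 4.1,
    "uniqueness": 2.9, "believability": 4.0,
}))  # -> "niche / needs work"
```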
Defenders of this approach argue that it serves its purpose: product concept testing is meant to provide an early signal of a concept’s relevance, differentiation and consumer interest in order to screen ideas in or out and rank them. It cannot be expected to predict a new product’s in-market success, they say, because it takes place early in the innovation cycle and so much changes after that; it is the product experience and in-market execution that ultimately determine success.
These arguments are convenient shortcuts reflecting intellectual laziness. The fact is, traditional concept testing methods were born decades ago, before the era of machine learning, AI and integrated data. They rely on quantitative survey data and traditional statistical methods solely because that was all the research industry had access to at the time.
Sticking with outdated beliefs and methods prevents consumer brands from unlocking the full growth potential of new products.
Solution: Launch AI predicts a product concept’s sales potential by using integrated data
Survey data feeding machine learning model: Launch AI combines consumer feedback from both closed-ended KPIs and open-ended responses, enhancing the accuracy of sales potential predictions and uncovering the key drivers of product success and failure. It uses Likert scale questions to capture high-level reactions, while proprietary natural language processing (NLP) models analyze deeper insights from open-ended responses. Our NLP taxonomy covers themes proven to influence consumer decisions.
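As a rough illustration of how closed-ended KPI scores and open-ended feedback can feed one predictive model, here is a minimal sketch; the feature layout, the TF-IDF stand-in for theme extraction and the model choice are assumptions for illustration, not Cambri’s proprietary NLP taxonomy or pipeline.

```python
# Minimal sketch of fusing closed-ended KPI scores with theme signals mined
# from open-ended responses. Feature names, the TF-IDF stand-in for theme
# extraction and the model choice are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Per-concept survey aggregates: mean Likert KPI scores (closed-ended part).
kpi_features = np.array([
    [3.9, 4.1, 2.9, 4.0],   # purchase intent, relevance, uniqueness, believability
    [3.1, 3.3, 3.8, 3.5],
])

# Open-ended responses pooled per concept; a production system would map these
# onto a theme taxonomy, here TF-IDF stands in for that step.
open_ends = [
    "love the taste idea but the price seems too high",
    "sounds artificial, would not replace my usual brand",
]
text_features = TfidfVectorizer().fit_transform(open_ends).toarray()

X = np.hstack([kpi_features, text_features])
y = np.array([1, 0])        # training labels: 1 = in-market success, 0 = failure

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict_proba(X)[:, 1])   # predicted success probability per concept
```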
Machine learning model using POS data: Using POS data, we analyze how well new products performed against the rest of their category. The outputs of our standardized product performance analysis feed into the Launch AI model. We focus on the first six months after launch so that the Launch AI model can capture, isolate and evaluate the impact of the product value proposition (concept) on new product in-market success. The role of the product value proposition (concept) is to generate product trials, while the product experience drives repurchase.
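For readers unfamiliar with ROS, the sketch below shows the kind of POS-based outcome measure assumed here: units sold per average store carrying the product over the first six months, indexed against the category. The column names and indexing convention are illustrative, not our exact methodology.

```python
# Minimal sketch of a POS-based outcome measure: rate of sales (ROS) over the
# first six months after launch, indexed against the category. Column names
# and the indexing convention are illustrative.
import pandas as pd

pos = pd.DataFrame({
    "product_id":         ["new_sku", "new_sku", "cat_avg", "cat_avg"],
    "month_since_launch": [1, 2, 1, 2],
    "units_sold":         [1200, 1500, 900, 950],
    "stores_selling":     [100, 110, 100, 100],
})

# Rate of sales = units sold per average store carrying the product.
pos["ros"] = pos["units_sold"] / pos["stores_selling"]

first_six_months = pos[pos["month_since_launch"] <= 6]
mean_ros = first_six_months.groupby("product_id")["ros"].mean()

# Index the new product against the category average (100 = category parity).
ros_index = 100 * mean_ros["new_sku"] / mean_ros["cat_avg"]
print(round(ros_index, 1))
```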
Measurable business impact delivered
Eighty-one percent accuracy in screening in likely successes and screening out failures: We monitor the accuracy of each Launch AI model version using a validation dataset of products with both survey data and POS data. Model accuracy is assessed by comparing actual in-market performance with the Launch AI model’s predictions for the same product concepts. The latest Launch AI model version achieved an accuracy of 81%.
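The validation step itself is straightforward; the sketch below shows how accuracy can be computed by comparing predicted success/failure labels with actual in-market outcomes. The labels are made up for illustration.

```python
# Minimal sketch of the validation step: compare predicted success/failure
# labels with actual in-market outcomes for the same concepts. The labels
# below are made up for illustration.
from sklearn.metrics import accuracy_score, confusion_matrix

actual    = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # 1 = in-market success, 0 = failure
predicted = [1, 0, 0, 1, 0, 0, 0, 1, 1, 0]   # model predictions for the same concepts

print(accuracy_score(actual, predicted))      # share of concepts classified correctly
print(confusion_matrix(actual, predicted))    # failures/successes caught vs. missed
```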
Launch AI helps double new product sales per store: By identifying failures with 80% accuracy and successes with 83% accuracy, Launch AI helps brands at least double new product sales per average store (ROS).
Actionable advice to screen in/out, rank and refine
Success Profile and Launch AI Score: Success Profiles help screen concepts in or out, while the Launch AI Score (0-100) helps rank them. Concepts are classified based on their predicted weighted rate of sales (ROS); a simplified illustration follows the list below:
- Success: Recommended for screening in. Only 15% of new products achieve this.
- Strong start: Concepts showing potential but requiring iteration, based on sales targets.
- Failure: Recommended for screening out. 65% of new products fall into this category.
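The sketch below illustrates how a predicted weighted ROS could be scaled to a 0-100 score and mapped to a Success Profile; the thresholds and scaling are illustrative assumptions, not Launch AI’s actual cut-offs.

```python
# Minimal sketch of turning a predicted weighted ROS into a 0-100 score and a
# Success Profile. Thresholds and scaling are illustrative assumptions, not
# Launch AI's actual cut-offs.
def launch_ai_score(predicted_ros: float, category_top_ros: float) -> int:
    """Scale predicted weighted ROS to 0-100 against a category reference."""
    return max(0, min(100, round(100 * predicted_ros / category_top_ros)))

def success_profile(score: int) -> str:
    if score >= 70:
        return "Success"        # recommended for screening in
    if score >= 45:
        return "Strong start"   # potential, but iterate before launch
    return "Failure"            # recommended for screening out

score = launch_ai_score(predicted_ros=6.2, category_top_ros=8.0)
print(score, success_profile(score))   # -> 78 Success
```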
Launch AI drivers: These identify the areas with the biggest impact on in-market success. Launch AI reports 10 drivers (a simplified illustration follows the list), each providing:
- Impact of each driver on the Launch AI Score (in % points).
- AI summary of key positives and negatives.
- Easy access to raw open-ended responses to validate consumer feedback.
- GenAI-generated new product value propositions based on consumer feedback.
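As one way to picture driver impacts expressed in points, the sketch below uses permutation importance over named driver features; the driver names, the data and the attribution method are assumptions for illustration, not Cambri’s actual approach.

```python
# Minimal sketch of expressing driver impacts in points via permutation
# importance over named driver features. Driver names, data and the
# attribution method are assumptions for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
drivers = ["taste_appeal", "price_perception", "naturalness", "packaging"]
X = rng.normal(size=(200, len(drivers)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Report each driver's impact as percentage points of accuracy lost when that
# driver's values are shuffled.
for name, impact in zip(drivers, result.importances_mean):
    print(f"{name}: {100 * impact:+.1f} pts")
```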
Curious to dive deeper? Access an example data set and insights in our e-book: www.cambri.io/resources/ai-outperforms-traditional-benchmarks.