
Predicting new product success with AI-powered concept testing 

Editor's note: This article is an automated speech-to-text transcription, edited lightly for clarity. For the full session, please watch the recording. 

Concept testing is an important step to ensure a successful launch of something new.  

During a session in the Quirk’s Virtual – AI and Innovation series on January 29, 2026, Dani Kamras, co-founder of Cambri, shared how the organization uses AI to predict new product success in market. He touched on how this method of concept testing differs from traditional approaches and what the organization has found to be the best measure of success.  

Session transcript 

Joe Rydholm

Hi everybody and welcome to our presentation, “Predicting new product success with AI-powered concept testing.” I'm Quirk’s Editor, Joe Rydholm.

Thanks for joining us today. Just a quick reminder that you can use the chat tab if you'd like to interact with other attendees during today's discussion and you can use the Q&A tab to submit questions to the presenter, and we'll get to as many as we have time for at the end.  

Our session today is presented by Cambri. Dani, take it away!

Dani Kamras 

Great, thank you.  

Welcome to our session everyone. Great to have you all joining. Greetings from a cold and snowy Stockholm.  

My name is Dani Kamras, one of the co-founders of Cambri. I will be taking you through our methodology and approach to concept testing, where we use AI to bring together point of sales data and survey data for a more robust and accurate prediction of how tested concepts will actually perform in market.  

So, that's the topic of today. Let's get started.  

Just a few words about us. Mainly we work with CPG companies throughout the whole innovation process. Many of the major CPG players, such as Coke, Nestlé and Carlsberg, work with us. We help them across the innovation process, but I would say most of the work we do is in the early stages of innovation: idea screening, concept development and concept validation. And that is where our focus will be today as well with this presentation. 

So, if we first take a step back and just highlight what the role of concept testing is because I think this is also important in how we have set up our methodology. 

The role of the concept is, of course, to drive trial of the product by positioning it so that consumers see it as relevant and of value to them. The role of concept testing, then, is to understand whether the concept has the potential to drive trial among the target group, but also why or why not, and how to improve it to increase its chances of actually driving trial.  

The stronger the concept, the more efficient your execution (the distribution, marketing and other things you invest in it) should be at driving sales. So, if you have a strong concept, you should be able to get more bang for the buck from your investments compared to if you have a weak concept. 

So, concept test results should really help you back the right horse, so to say. But then if we look at the other part, the business impact of a concept: what point-of-sales metric best tells us the performance of the concept? Because, as you know, a lot more goes into a product succeeding than just the concept.  

So, what is a good point-of-sales metric for measuring a concept’s success and strength? We define success through velocity, or rate of sales: basically, average sales per average store where the product can be found. The higher the product’s velocity six months after launch, the more successful we consider the concept to be. So, we measure the success of the concept through velocity, or rate of sales, six months after launch. 

And why is that?  

We see that velocity is the best point-of-sales metric to really isolate the concept's impact on the product’s success because it neutralizes for distribution. And looking at six months after launch also minimizes the influence of repurchase behavior, meaning whether the actual product delivers on its promise. So, if you get a high velocity when you launch a product, that's usually where the concept has the biggest impact.  

A high velocity is also important because it lowers your risk of getting delisted and actually helps you earn more distribution going forward. 
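To make the velocity metric concrete, here is a minimal sketch of the calculation as described above: average sales per average store carrying the product, over the first six months after launch. The function, data and column layout are hypothetical illustrations, not Cambri's actual pipeline:

```python
# Hypothetical sketch of the velocity (rate-of-sales) metric described above:
# average weekly sales divided by the average number of stores stocking the
# product, measured over the post-launch window. Data is invented.

def velocity(weekly_sales, weekly_store_counts):
    """Average sales per average store where the product can be found."""
    avg_sales = sum(weekly_sales) / len(weekly_sales)
    avg_stores = sum(weekly_store_counts) / len(weekly_store_counts)
    return avg_sales / avg_stores

# Example: 26 weeks (~six months) of post-launch data
sales = [1200, 1350, 1100, 1500] * 6 + [1300, 1250]  # units sold per week
stores = [400, 410, 405, 420] * 6 + [415, 418]       # stores carrying the product

print(f"velocity: {velocity(sales, stores):.2f} units per store per week")
```

Because both quantities are averaged over the same window, adding distribution (more stores) does not inflate the metric, which is the "neutralizes for distribution" point above.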

That's a bit on how we view concept testing and concept evaluation, and how we define success, or strength, for a concept. 

So, what do we do differently compared to traditional concept testing and why? 

We are looking at four different points here that we want to highlight and go through today. The first one is that we benchmark against real market performance. So, instead of benchmarking against test results of concepts that maybe never even got launched, or that became failed products in market, we benchmark against the true in-market success of recent launches to really know whether the concept will drive trial or not.  

So, that is the first part, which we see as really important: you are benchmarking against something that is truly relevant. 

The second one is that we include all open-ended data in the analysis. So, instead of subjectively cherry-picking the best themes, we use all the open-ended data in the survey in a scientific and systematic way to really quantify the qualitative feedback's impact on the concept's potential to sell well. 
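One simple way to use all open-ended responses systematically, rather than cherry-picking quotes, is to turn them into quantitative features, such as the share of respondents whose answer mentions each theme. The sketch below uses keyword matching purely for brevity; the theme lexicon and responses are invented, and this is not a description of Cambri's actual text analysis:

```python
# Hedged sketch: quantify open-ended survey feedback as per-theme features.
# Theme keywords and responses are invented; a production system would use
# proper NLP (embeddings, topic models) rather than keyword matching.

THEMES = {
    "taste": {"taste", "flavor", "delicious"},
    "price": {"price", "expensive", "cheap", "value"},
    "health": {"healthy", "sugar", "natural"},
}

def theme_shares(responses):
    """Share of responses mentioning each theme at least once."""
    counts = {theme: 0 for theme in THEMES}
    for text in responses:
        words = set(text.lower().split())
        for theme, keywords in THEMES.items():
            if words & keywords:
                counts[theme] += 1
    return {theme: n / len(responses) for theme, n in counts.items()}

answers = [
    "Love the taste but too expensive",
    "Sounds healthy and natural",
    "Good value for the price",
    "The flavor seems boring",
]
print(theme_shares(answers))
```

Per-theme shares like these can then enter a predictive model as numeric inputs, which is what "quantifying the qualitative feedback" means in practice.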

The third part is that we pick the concept apart into relevant drivers. In the results, we break the concept down into nine drivers to understand why it is performing as it does and how it could be improved. The model really understands which drivers are more or less important in your category for driving trial, or high velocity, basically.  
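The idea of learning category-level driver importance can be illustrated with a simple regression of observed velocity on driver scores from past launches. The driver names and numbers below are invented (Cambri's nine drivers and model are not detailed in this session), so this is only a sketch of the general technique:

```python
# Hedged sketch: estimate which concept drivers matter in a category by
# regressing observed velocity on driver scores from past launches.
# Driver names and all data are invented for illustration.
import numpy as np

drivers = ["relevance", "uniqueness", "believability"]  # illustrative subset

# Rows: past launches in the category; columns: driver scores (0-10)
X = np.array([
    [8.0, 5.0, 7.0],
    [6.0, 7.0, 6.0],
    [9.0, 4.0, 8.0],
    [5.0, 6.0, 5.0],
    [7.0, 8.0, 7.0],
])
y = np.array([3.1, 2.4, 3.6, 1.9, 3.0])  # observed velocities

# Least-squares fit with an intercept column appended
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

for name, w in zip(drivers, coef[:-1]):
    print(f"{name}: {w:+.2f}")
```

The fitted weights indicate which drivers move velocity most in that category, which is the kind of "why is it performing as it does" diagnostic described above.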

The fourth point is that we use a validated and accurate machine learning, or AI, model. We use a model that has been trained on a combination of point-of-sales data and concept test data, meaning survey data, with full transparency on how accurately the model predicts a concept's potential in market. We are fully transparent about the correlation we see between our model's predictions and how the concepts then actually end up performing in market. And with this more robust methodology, we consistently see, across categories and regions, that we can get more than three times higher accuracy in picking the right concepts to take to market. 
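The transparency check described here, reporting how well predictions correlate with actual in-market performance, can be sketched as a simple holdout comparison. All numbers below are invented; this only illustrates the validation idea, not Cambri's model:

```python
# Hedged sketch: validate a prediction model by comparing predicted
# velocities against actual post-launch velocities on held-out products
# and reporting the Pearson correlation. All numbers are invented.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

predicted = [2.8, 1.9, 3.5, 2.2, 3.0]  # model output for holdout concepts
actual    = [3.1, 1.7, 3.6, 2.4, 2.8]  # velocity six months after launch

print(f"holdout correlation: {pearson(predicted, actual):.2f}")
```

Publishing a number like this against held-out launches is what makes a model's claimed accuracy verifiable rather than asserted.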

Also, with this robustness, we get a better understanding of how to optimize the concepts, leading to more successful product launches and growth, basically.