Future-Proofing Your Research: How Advanced Methods Drive Data You Can Trust
Editor's note: This article is an automated speech-to-text transcription, edited lightly for clarity.
Many researchers can relate to the feeling of being under pressure on a project, sifting through mountains of data but still wondering whether they can trust it.
“This is the core challenge of modern market research,” said Gianna Saladino, senior solutions consultant at quantilope. “We're drowning in data but starving for quality insights.”
Saladino and Lindsey Guzman, Ph.D., senior solutions consultant at quantilope, discussed how to feed the need for quality insights in their Quirk’s Virtual Sessions – Data Quality presentation on September 25, 2025.
Guzman and Saladino discussed advanced research methods, survey design and AI’s effect on data quality.
Session transcript
Joe Rydholm
Hi everybody and welcome to our session, “Future-Proofing Your Research: How Advanced Methods Drive Data You Can Trust.”
I'm Quirk’s Editor, Joe Rydholm.
Before we get started, let's quickly go over the ways you can participate in today's discussion. You can use the chat tab to interact with other attendees, and you can use the Q&A tab to submit questions for the presenters during the session. They'll answer as many questions as we have time for during the Q&A portion at the end.
Our session today is presented by quantilope. Gianna, take it away.
Gianna Saladino
Perfect. Thank you.
Hi, good morning, everyone. I'm Gianna Saladino. I'm a senior solutions consultant here at quantilope. I've been here about five years now. I started my research career over at Ipsos. So, I come from more of that traditional agency background.
With that, I'll pass it to my colleague, Lindsey.
Lindsey Guzman, Ph.D.
Hi everyone. My name is Lindsey Guzman. I'm a senior solutions consultant here at quantilope.
I've been with quantilope for about four years, but I have 15 years of research experience with a background in academia.
Gianna Saladino
Perfect. So, I want to start by setting the groundwork for us.
I want you to think about the last time you were in the hot seat, tasked with guiding a major product launch. The pressure's on and you're sifting through mountains of data, but a nagging question keeps popping up. ‘Is this data actually reliable? Are people telling me what I need to hear or what they really think? Am I even scratching the surface of their true decision-making process?’
This is the core challenge of modern market research. We're drowning in data but starving for quality insights.
Relying on basic research methods leaves you exposed to biases, which can sometimes lead to costly mistakes, which could undermine your credibility. It's not about having more data, it's about having better data.
Today we're going to show you a different way. We'll explore how advanced research methods don't just add sophistication. They fundamentally improve the quality of your data, making your insights more robust and reliable.
We've seen firsthand how high-quality data can be the catalyst for real innovation. Now, we'll show you how these methods have been applied to our clients in the past. We hope to be able to inspire you to think about how you could use these approaches to improve your own research.
Let's take a quick look at what we'll be covering today.
Our journey begins with the insights paradox, where we'll explore why data quality is no longer just a checkbox, but the foundation of every confident business decision.
From there, we'll dive into the power of advanced methods. We'll discuss why moving beyond simple direct questions is crucial and how advanced techniques are the key to unlocking trustworthy data.
We'll take a deep dive into specific methods like conjoint, TURF and implicit testing, and show you exactly how they reduce bias and provide richer, more reliable insights.
Next, we'll address trust in the age of AI. The rise of AI and automation presents new data quality challenges, and we'll show you how to safeguard your research with modern data cleaning tools and international standards, like ISO 20252.
Finally, we'll talk about how designing your surveys with engagement in mind can lead to better data.
We'll wrap up with a summary of our key takeaways and of course open the floor to any questions you have. Let's jump in.
What exactly do we mean by advanced methods?
Let's start with a quick comparison to traditional research. Think about a simple survey question. ‘How likely are you to buy this product?’
The problem with this approach is that people often give socially desirable answers or fail to accurately predict their own behavior. They might say they'll buy something, but when faced with a real-world choice, a different factor entirely drives their decision.
This is what we call the “Stated Versus Revealed Preference Gap.”
Advanced methods are the antidote to this problem. These are sophisticated, scientifically backed techniques designed to bypass the biases inherent in direct questioning. Instead of asking people what they think, these methods are designed to measure what they truly believe and value.
Let's start with our first method, conjoint analysis.
Conjoint analysis is an advanced statistical technique that gets past superficial survey answers to understand how consumers make trade-offs.
Unlike traditional surveys where people say they want everything cheap, conjoint analysis mimics real-world decision making. We present respondents with different product bundles, like a phone with specific features at a specific price, and ask them to choose their preference.
By doing this, we can mathematically determine the hidden value or utility consumers place on each individual feature.
For example, a global snack brand found that traditional surveys gave them useless data. Everyone said they wanted every flavor at a low price. We used conjoint analysis to force consumers to choose between different flavor assortments, package sizes and prices.
The analysis revealed that while low price was a factor, consumers valued a larger bag of their core favorite flavors over a wider variety of flavors. This insight allowed the brand to streamline its product line, cut niche flavors and boost sales by focusing on its best sellers.
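The trade-off logic above can be made concrete with a small sketch. Real conjoint studies fit a multinomial logit or hierarchical Bayes model; this toy version just compares how often a feature level is chosen versus shown, and the snack-brand choice data is invented for illustration.

```python
from collections import defaultdict
import math

def estimate_utilities(choice_tasks):
    """Rough part-worth estimate: for each feature level, compare how often
    it appears in the chosen bundle vs. how often it was shown at all.
    (A real conjoint study would fit a multinomial logit model instead.)"""
    shown = defaultdict(int)
    chosen = defaultdict(int)
    for alternatives, choice_index in choice_tasks:
        for i, bundle in enumerate(alternatives):
            for level in bundle:
                shown[level] += 1
                if i == choice_index:
                    chosen[level] += 1
    # Log-odds-style score; +1 smoothing avoids log(0) for never-chosen levels.
    return {level: math.log((chosen[level] + 1) / (shown[level] + 1))
            for level in shown}

# Hypothetical snack tasks: each bundle is a (size, assortment, price) tuple,
# and the second element is the index of the bundle the respondent picked.
tasks = [
    ([("large", "core flavors", "$4"), ("small", "wide variety", "$3")], 0),
    ([("large", "core flavors", "$5"), ("small", "wide variety", "$3")], 0),
    ([("small", "core flavors", "$3"), ("large", "wide variety", "$5")], 0),
]
utils = estimate_utilities(tasks)
```

Even in this toy data, "core flavors" earns a higher utility than "wide variety" because it wins every choice it appears in, mirroring the finding described above.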
Let's move on to our next advanced method, TURF analysis.
Imagine you're launching a new ice cream brand. You conducted a survey and found that chocolate and vanilla are everyone's favorites, while pistachio is a distant third. A basic analysis might tell you to only sell chocolate and vanilla.
But what if everyone who loves pistachio hates the other two? You'd be leaving a whole group of potential customers unserved.
TURF analysis is an advanced method used to find the most efficient combination of offerings. It helps you maximize market reach by identifying which products, services or features appeal to distinct groups of customers rather than overlapping with your existing popular offerings.
For instance, a cereal brand discovered that their top two sellers appealed to the same customers. By using TURF analysis, they were able to replace one of their best sellers with a new flavor that, while only moderately popular on its own, captured a completely new segment of customers.
This new product lineup increased their market reach and boosted sales by 15% within a year.
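The core TURF calculation can be sketched in a few lines: for each candidate lineup, count the unique respondents it reaches. This is an illustrative exhaustive search with made-up ice-cream data, not quantilope's implementation.

```python
from itertools import combinations

def turf(appeal, k):
    """Find the k-item lineup with maximum unduplicated reach.
    `appeal` maps each item to the set of respondent IDs it appeals to."""
    best_combo, best_reach = None, -1
    for combo in combinations(appeal, k):
        reached = set().union(*(appeal[item] for item in combo))
        if len(reached) > best_reach:
            best_combo, best_reach = combo, len(reached)
    return best_combo, best_reach

# Hypothetical data: chocolate and vanilla fans overlap heavily,
# while pistachio reaches a distinct group of respondents.
appeal = {
    "chocolate": {1, 2, 3, 4, 5},
    "vanilla":   {1, 2, 3, 4, 6},
    "pistachio": {7, 8, 9},
}
combo, reach = turf(appeal, 2)
```

Here the best two-flavor lineup includes pistachio: pairing a popular flavor with the niche one reaches eight respondents, while chocolate plus vanilla reaches only six.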
Now, for our last method today, let's talk about a powerful technique that gets us past what people are willing to say and into what they truly believe: implicit association testing.
Think about it, how often do our actions align perfectly with what we say?
We might say we want to eat healthy, but subconsciously we're drawn to the comfort of fast food. We might say a brand is innovative, but our minds actually associate it with outdated ideas.
The problem with traditional surveys is that they only capture our conscious, rational thoughts. They're open to something called ‘social desirability bias,’ where people give answers they think they're supposed to give.
The solution is implicit association testing. It's not a survey but a timed test that measures the strength of mental associations. When two concepts are strongly linked in someone's mind, they sort them together faster. This helps us understand what people truly believe, not just what they are willing to say.
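As a rough illustration of how sorting speed becomes an association score, here is a simplified IAT-style calculation. Real implicit tests use more elaborate scoring (for example, D-scores with error penalties), and the reaction times below are invented.

```python
from statistics import mean, stdev

def association_score(congruent_ms, incongruent_ms):
    """Simplified IAT-style score: faster sorting in the 'congruent' pairing
    implies a stronger mental link. Positive score = the congruent pairing
    is the stronger association."""
    pooled = congruent_ms + incongruent_ms
    return (mean(incongruent_ms) - mean(congruent_ms)) / stdev(pooled)

# Hypothetical reaction times (ms): brand + "comfort" is sorted quickly,
# brand + "modern" slowly, so the comfort association dominates.
comfort_pairing = [420, 450, 430, 440]
modern_pairing = [610, 650, 640, 620]
score = association_score(comfort_pairing, modern_pairing)
```

A clearly positive score here would suggest the brand's subconscious link to comfort outweighs its stated "modern" appeal, the kind of gap described in the case study that follows.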
A major snack brand we collaborated with had a new package that received great feedback in surveys. Consumers described it as modern and appealing. However, the brand was concerned about how the change would be received by their loyal customer base.
Our implicit association tests showed that while people liked the new look consciously, their subconscious minds linked it to the unfamiliar and impersonal, losing the brand's core association with comfort and familiarity.
Based on this insight, the brand made a crucial decision. They didn't scrap the new design but instead refined it by incorporating key visual elements from the original packaging which triggered those feelings of comfort and nostalgia. This updated design tested even better, proving that implicit insights can help a brand evolve without losing its core identity.
Lindsey Guzman, Ph.D.
Alright, so as we move further into a world powered by AI and automation, data collection has become faster and more scalable than ever before. But with that speed comes a significant new challenge.
How do we ensure the data we're collecting is from real people and not from bots or sophisticated fraud rings?
The old ways of data quality, which are simply screening for speeders or straight-liners, are no longer enough. The bots are smarter, and the fraudulent schemes are more advanced.
So, how do we build a foundation of trust in this new landscape?
First, we have to start with a solid framework. This is where international data standards, like ISO 20252, come into play. They are a set of best practices that provide a reliable framework for quality assurance throughout the entire research process.
We need to move beyond those simple basic checks because a bot can be programmed to answer a survey at a normal speed and to avoid giving the same answer repeatedly.
This is where AI-powered data cleaning comes in. These tools can analyze patterns that are invisible to the human eye.
For instance, they can use machine learning to analyze things like IP address consistency, digital fingerprints and the entire response pattern across a study to detect organized fraud rings.
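One of the simplest checks in that family, flagging respondents who share an IP address or device fingerprint, can be sketched as follows. This is illustrative only; production systems combine many signals with machine learning.

```python
from collections import Counter

def flag_duplicates(respondents, max_per_ip=1):
    """Flag respondents whose IP address (or device fingerprint) has already
    been seen more than `max_per_ip` times -- a simple stand-in for the
    duplicate and fraud-ring checks described above."""
    counts = Counter()
    flagged = []
    for respondent_id, ip in respondents:
        counts[ip] += 1
        if counts[ip] > max_per_ip:
            flagged.append(respondent_id)
    return flagged

# Hypothetical survey entries as (respondent_id, ip_address) pairs.
entries = [("r1", "10.0.0.1"), ("r2", "10.0.0.2"),
           ("r3", "10.0.0.1"), ("r4", "10.0.0.1")]
dupes = flag_duplicates(entries)
```

In this made-up data, respondents r3 and r4 reuse r1's address and get flagged for review.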
AI isn't just a buzzword, it's a shift in how we handle and trust data. Instead of reacting to bad data after the fact, AI helps us build quality from the very start. It's a move from a reactive approach to a proactive one, continuously monitoring and improving data.
Here's how AI is transforming data quality.
AI goes beyond simple data cleaning. It uses sophisticated algorithms to identify and prevent errors before they can impact your insights. Instead of just fixing bad data, it stops it before respondents even enter the survey.
The power of AI also lies in its ability to recognize complex patterns at scale. While a human would take days to manually review data, AI algorithms can instantly sift through millions of data points. They can spot subtle inconsistencies, bot behavior or even organized fraud attempts that would be nearly impossible for a person to find. This allows us to catch issues that were previously undetectable.
AI systems learn what normal data looks like. They continuously monitor incoming data and the moment a suspicious pattern or outlier emerges, it's flagged often in real-time.
And AI intelligently standardizes messy unstructured data. For example, it can recognize that NYC, Big Apple and New York, New York all refer to the same place ensuring free text fields are consistent and usable.
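A minimal sketch of that kind of standardization, using a hand-built alias table in place of the machine-learning and fuzzy matching a real system would apply:

```python
# Hypothetical alias table; a production system would learn or fuzzy-match
# these mappings rather than hard-code them.
ALIASES = {
    "nyc": "New York, NY",
    "big apple": "New York, NY",
    "new york": "New York, NY",
    "new york, new york": "New York, NY",
}

def standardize_city(raw):
    """Normalize a free-text city answer to a canonical label."""
    key = raw.strip().lower().rstrip(".")
    return ALIASES.get(key, raw.strip())

city = standardize_city("  NYC ")  # "New York, NY"
```

Answers outside the table pass through unchanged, so unexpected but valid cities are preserved for later review.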
The outcome of all this is that we achieve higher confidence in every single insight we derive.
We get a massive boost in efficiency as insights professionals are freed from tedious, manual data manipulation.
Ultimately, this gives your organization a critical strategic advantage. Your decisions are no longer based on assumptions or imperfect data, which leads to stronger product innovation, more effective marketing and better business outcomes.
Alright, let's shift gears and talk about something that I am personally very passionate about, a crucial yet often overlooked element of data quality: the human element. Because let's be honest, no one loves filling out surveys, and they can often feel like a chore.
But here's the thing, if people actually enjoy taking your survey, they're more likely to give you thoughtful, accurate answers and that leads to better, more reliable data.
So, how do we make our surveys feel less like a chore and more like a conversation? It all comes down to a few key design principles.
First, we must respect people's time. A survey that feels like it's never going to end is a recipe for bad data.
People get tired, lose focus, and start clicking through just to finish. We should always aim for our surveys to be 10 minutes or less, and within that time limit, we should give people a sense of progress.
Interject throughout the survey to let them know what the next section will cover and how close they are to finishing. That little bit of encouragement goes a long way.
Second, we have to design with a mobile-first mindset.
The reality is most people are taking surveys on their phones. If your survey isn't optimized for a small screen, you're not just creating a bad user experience, you're risking your data quality. Questions that require a lot of scrolling, or buttons that are too small, can lead to respondents missing options or making mistakes, which directly impacts the quality of your data.
Finally, we should always be looking for ways to add an element of gamification.
Using different question types that go beyond the standard single select or multiple select, things like sliders or even advanced methods, makes the survey feel less like a test and more like an interactive experience. You can also add visuals or short videos to break up the text.
By optimizing your surveys for the people taking them, you'll get higher completion rates, reduce bias from fatigue and ultimately gather more accurate and reliable data. It's a simple equation. Happy respondents equal better insights. So, put on your UX designer hat when you're writing these surveys and make your surveys fun.
Continuing with our discussion on survey design and data quality, let's talk about one of the simplest yet most effective tools for data quality. The open-ended question.
I know it might seem basic, but in today's landscape of fast data collection, it's more critical now than ever. We should be recommending that every survey we work on include at least one open-ended question. And there are two main reasons why.
First, open-ended questions are a fantastic, old school quality check. They're a simple way to spot low effort or fraudulent responses that have slipped past our more advanced screening methods.
Think about it. A bot or a truly inattentive respondent can often get through multiple choice questions without any issues, but it's much harder for them to provide a coherent, unique written response.
When you're reviewing the data, keep an eye out for a few telltale signs of AI generated content:
- Unlike most human answers, AI responses often have flawless grammar and spelling.
- The content is correct but lacks any personal details or specific experiences.
- The language can be overly formal and robotic.
- Sometimes the response simply restates the question without adding any new information.
- Lastly, be wary of answers that are excessively long and detailed.
If you see these signs, it's a good indicator that the response isn't from a real person and should be flagged.
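A few of those telltale signs can even be approximated with simple heuristics. The sketch below is illustrative only, checks restatement, personal voice and length with invented thresholds, and should never replace a human reviewer's judgment:

```python
def ai_content_flags(question, answer):
    """Heuristic checks mirroring some of the telltale signs listed above.
    Each check is a rough proxy, not a definitive detector."""
    flags = []
    words = answer.split()
    if len(words) > 150:
        flags.append("excessively long")
    # Restatement: does the answer reuse nearly all the question's key terms?
    q_terms = {w.lower().strip("?.,!") for w in question.split() if len(w) > 3}
    a_terms = {w.lower().strip("?.,!") for w in words}
    if q_terms and len(q_terms & a_terms) / len(q_terms) > 0.8:
        flags.append("restates the question")
    # Lack of personal perspective: no first-person pronouns at all.
    if not any(w.lower().strip("?.,!") in {"i", "my", "me", "we"} for w in words):
        flags.append("no personal perspective")
    return flags

# Hypothetical example: an answer that parrots the question and never says "I".
flags = ai_content_flags(
    "Why do you buy this snack brand?",
    "People buy this snack brand because the snack brand offers quality.",
)
```

Flagged responses would then go to a human for the final keep-or-remove decision.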
Finally, and just as important, open-ended questions provide deeper insights.
Quantitative data is powerful, but it's limited. It can tell you what people think, but an open-ended question can tell you why. It can reveal new issues or opportunities that you never thought to ask about in your closed-ended questions. It adds a layer of rich qualitative depth to your numbers.
So, if a client or your own team comes to you with a survey that doesn't have an open-ended question, please recommend adding one.
Alright, now let's talk about quantilope’s approach to data protection.
quantilope ensures high-quality data through a three-phase approach that works before, during and after a survey.
Before a survey even begins, quantilope blocks bad participants. It uses a pre-survey defense module to analyze things like IP address and VPN usage, which helps to catch and block known fraudsters, bots and duplicate entrants.
We also work with top-tier panel providers who have their own strict quality checks in place.
While a survey is in progress, we use various attention checks to make sure that people are paying attention.
These include questions with simple instructions like adding together two numbers, asking if respondents use a made-up product or having them complete a task like identifying an object in an image.
Once the survey is complete, quantilope automatically flags and removes low-quality responses.
This is done through automated data cleaning flags that check for people who finish too quickly or too slowly.
We also have a straight lining flag that catches people who give the same answer to every question.
We also have two open-ended cleaners. The first cleans out gibberish or nonsensical answers, while the copy/paste flag detects answers that were copied and pasted from sources like ChatGPT.
We also offer custom flags that allow you to create your own rules to identify and remove participants who display inconsistent answering behavior based on test questions or other criteria that you define.
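To illustrate how speeder and straight-lining flags work in principle, here is a simplified sketch. The thresholds and data are invented for illustration and are not quantilope's actual rules.

```python
from statistics import median

def clean_flags(responses, grid_answers):
    """Flag speeders (under 1/3 of the median duration), slow outliers
    (over 3x the median) and straight-liners (identical answer on every
    grid row). Thresholds here are illustrative only."""
    med = median(r["seconds"] for r in responses)
    flags = {}
    for r in responses:
        f = []
        if r["seconds"] < med / 3:
            f.append("speeder")
        elif r["seconds"] > med * 3:
            f.append("too slow")
        rows = grid_answers.get(r["id"], [])
        if len(rows) > 1 and len(set(rows)) == 1:
            f.append("straight-liner")
        flags[r["id"]] = f
    return flags

# Hypothetical respondents: completion time in seconds plus grid answers.
responses = [{"id": "a", "seconds": 300}, {"id": "b", "seconds": 60},
             {"id": "c", "seconds": 320}]
grids = {"a": [3, 4, 2, 5], "b": [5, 5, 5, 5], "c": [1, 2, 1, 3]}
flags = clean_flags(responses, grids)
```

In this toy data, respondent "b" is caught twice: they raced through in a fifth of the median time and gave the same grid answer on every row.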
At quantilope, we've always made confidentiality, integrity and data protection our top priorities. This commitment ensures our clients can be confident that their data is managed securely and that our research processes are of the highest quality.
To back this up, we've gone the extra mile to achieve two major certifications.
First, our ISO 27001 certification confirms our commitment to information security. This means we have a world-class information security management system that meets rigorous global standards, giving you peace of mind that your data is always protected.
Second, our recent ISO 20252 certification focuses specifically on our research processes. This standard ensures that everything we do from data collection to reporting meets the highest quality standards, guaranteeing that our clients receive reliable and accurate insights.
Back to you, Gianna.
Gianna Saladino
Perfect.
Well, we've covered a lot of ground today and I want to quickly recap the key takeaways.
First and foremost, we've shown that data quality is a strategic asset. In a world where every company is a data company, the true competitive advantage isn't in having the most data, but in having data you can actually trust.
Next, we explored why advanced methods are so crucial. They allow us to move beyond what people say they'll do and uncover what they truly feel and value.
Methods like conjoint analysis force people to make real-world trade-offs. TURF analysis helps you define the most efficient mix of products to reach the widest audience. And implicit association testing bypasses conscious bias to reveal true subconscious brand perceptions. These methods are essential for reducing bias and gathering richer and more reliable insights.
And finally, we looked at how to future-proof your research in the world of AI.
With the rise of sophisticated bots and fraud, we need more than just simple data cleaning. We have to use AI-powered tools that can proactively identify and block fraudulent behavior before it ever enters your data set.
By adhering to international standards like ISO 20252, you're ensuring your entire research process is high-quality, transparent and ethical.
Now, thank you all for your time today. We hope this presentation will inspire you to think about new ways to add data quality assurances to your research process.
With that, we would love to now open the floor up for any questions.