Designing Research for Better Data Quality
Editor's note: This article is an automated speech-to-text transcription, edited lightly for clarity.
As teams are being tasked with doing more with less, research is often over-designed, with too many objectives. This can lead to lengthy sessions, long questionnaires and too many different task types, resulting in respondent fatigue and lower-quality data.
Smart research design ensures a better respondent experience, leading to better data quality and more actionable insights. That is why Blueberry and other research organizations approach the design of a project with the respondent experience in mind.
Lisa McGurk, vice president at Blueberry, discussed how to achieve great design in her session during the September 25, 2025, Quirk’s Virtual Sessions – Data Quality series.
Session transcript
Joe Rydholm
Hi everybody and welcome to our session, “Designing Research for Better Data Quality.”
I'm Quirk’s Editor, Joe Rydholm. Before we get started, let's quickly go over the ways you can participate in today's discussion.
You can use the chat tab to interact with other attendees, and you can use the Q&A tab to submit questions for the presenters during the session. We'll answer as many as we have time for during the Q&A portion.
Our session today is presented by Blueberry. Enjoy the presentation.
Lisa McGurk
Hello and welcome to our session.
I would like to start by saying thank you to the team at Quirk’s.
I'm Lisa McGurk, vice president at Blueberry, and we are happy to be here today to participate in the discussion on data quality.
Blueberry is a sensory and marketing research company, and we have a lot of deep experience conducting in-person and online research.
Our contribution to the conversation today is about how proper research design is the foundation for high quality, high value data. It is true that strong research design leads to better data collection, which leads to better data, which leads to stronger business decisions being made.
We all know, regardless of industry, teams are stressed with doing more with less and doing it faster than ever before. Whether it's a survey, product test, focus group or online discussion board, the design is the foundation for great research.
So, why does research design matter?
Proper research design is the foundation of the project. It is the thread that links the business questions to consumers, which leads to the learnings that guide the insights that lead to business outcomes. Design is the heart of research.
So, without a strong design, you run the risk of collecting shallow or incomplete data that does not provide direction for the business. And in today's environment, that is a strong risk.
Strong data helps teams move forward with clarity. So, before we get into it, let's discuss what establishes good quality data.
When we say data quality, what do we actually mean?
While there are many factors that go into data quality, we at Blueberry see four key themes: accuracy, validity, completeness and reproducibility.
Accuracy means: does the data actually reflect reality and the consumer or respondent voice? Is that voice being properly represented?
Validity means: are you measuring what you intend to measure? If you're trying to understand consumer acceptance of a product, are you asking appropriate questions around product liking among people who have actually tasted the product? That is key.
Then completeness: do you have enough data, and are you able to answer the research question among the correct group of respondents?
Lastly, are you able to reproduce the data? Would another researcher be able to follow your methodology and get similar results?
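For teams that work with their survey exports programmatically, checks like completeness and validity can be made repeatable. The sketch below is a minimal example, not Blueberry's actual tooling; the file name and column names (respondent_id, tasted_product, overall_liking) are invented for illustration, and overall_liking is assumed to be a 1-9 hedonic scale.

```python
# Minimal sketch of completeness/validity checks on a hypothetical survey export.
import pandas as pd

df = pd.read_csv("survey_export.csv")  # hypothetical file name

required_cols = ["respondent_id", "tasted_product", "overall_liking"]

# Completeness: flag respondents missing answers to required questions.
missing = df[df[required_cols].isna().any(axis=1)]
print(f"{len(missing)} respondents have incomplete required answers")

# Validity: overall_liking is assumed to be a 1-9 scale, so values outside
# that range point to a programming or data-entry error.
out_of_range = df[df["overall_liking"].notna() & ~df["overall_liking"].between(1, 9)]
print(f"{len(out_of_range)} out-of-range liking scores")

# Validity/accuracy: liking should only come from people who tasted the
# product, so a score from a non-taster signals a broken skip pattern.
inconsistent = df[(df["tasted_product"] == "no") & df["overall_liking"].notna()]
print(f"{len(inconsistent)} liking scores from respondents who did not taste")
```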
When these considerations are not made in research design, you run the risk of not having accurate, reliable or valid data, and the consequences here result in making bad business decisions, potentially wasting resources, and unfortunately sometimes having a bad impact for the brand.
What are we going to talk through today?
As we discuss data quality as rooted in strong research design, we see four key areas.
First, we have to start with clear research questions and objectives, and then discuss sampling strategy, measurement and instrument design and lastly, collecting the data.
So, let's go into it a little bit deeper and get into clear research questions, which are the foundation for data quality and it's how we begin our approach in designing research.
Before research design can begin, it is important to determine what the team needs to know and, importantly, why.
There are two key questions we like to ask teams before we embark on a research path with them. The two questions are:
- What decisions are being made with this data?
- What next steps are being taken in the process as a result of this research?
These are two powerful questions that really help to focus teams before we even start talking about methodology, timelines or budgets. The answers to these questions are vital because they ensure the research is conducted in the proper frame and leads to action for the business, which is what this is all about.
But what does this mean in practice?
What it means is if a team is in an early-stage foundational project phase, the questions and needs of the research are different than if a team is facing a competitive threat and they need to respond immediately.
It is also important to understand what the team can impact and what it can't impact.
What does this mean?
If we are conducting product research on a new formulation and the packaging structure is set and will not change, asking too many questions about the packaging structure could create a lot of noise in the data.
That is why it is important to determine at the beginning of the process what is need-to-know data and what is nice-to-know. Adding too many nice-to-know questions and secondary objectives to the research can often overshadow the need-to-know data, which can ultimately lead to respondent fatigue and confusion, which then impacts data quality and can, at times, lead to inconclusive data.
All of this can impact the research outcome and set you back rather than moving you forward.
So, back to the beginning when we talked about teams being stressed and having a lot of objectives.
While sometimes there can be the temptation to add a lot to your research, sometimes less is more, and being really clear and focused helps you align. Ultimately, being aligned on your research objectives helps you make sure you're getting to the decisions you need to make.
Coming to clear and concise research questions ultimately leads to better research design and better outcomes.
So, what is strong research design?
We've established that good research starts with a foundation. So, we see four key building blocks. We've already talked about clarity around the research question and objectives.
So, it is incredibly important to stay focused on what you actually need to know before adding objectives or additional questions. It is important to assess the overall benefits and trade-offs of adding more. We'll get into each of these in a little bit more detail.
But the other important elements include sampling strategy, measurement and instrument design and then ultimately, the data collection protocol.
So, by sampling strategy, we mean: who do you need to answer your questions?
It's important to understand these considerations because it can ultimately impact the outcome of the research if you sample the wrong population.
Instrument and measurement design, this is really where the magic happens. It's determining the best way to answer your questions.
Is this a programmed survey or is it qualitative? Is your team looking for definitive answers or expansive answers? What method is going to give you the best learning?
And then once that's been determined and decided, how are we collecting the data to ensure the best outcomes?
So, are the tasks easy to follow? Are the instructions easy for respondents to understand? Will we be creating fatigue or confusion leading to low engagement?
All of these building blocks together are essential for a good design.
So, let's dive into sampling strategy a little bit more.
Who you study determines your conclusions, so consider which audiences you need to answer your research questions.
So, it's important to take a step back and discuss at the outset of your research, who do you need to talk to? Is Gen Pop appropriate or is it better to have concept accepters and category users depending on your research topic?
If you're looking for products for the whole family, do you need to talk to parents and their kids or is the parent the key stakeholder? When you're looking in the health and wellness space or personal care areas, perhaps there's someone who suffers an ailment and they have a caretaker that helps them. Is it important to hear from the sufferer and also the caretaker?
These are key considerations when it comes to designing who you need to sample in your research.
Once these higher-level considerations are met, then you need to determine what kinds of segments you need to look for.
Are we interested in discussing the objectives with brand users or loyalists, or is it more beneficial, and easier to get to your research objectives, to talk to occasional or lapsed users?
The business needs will help answer these questions and establish how you should be sampling your populations.
Any shifts in these considerations can dramatically change the conclusions and findings in your research. So, it is important to ensure you're sampling the correct audience and then you can determine who among that audience and how many.
Once you have your sampling frame determined, then you can start getting into some of the behavioral and psychographic criteria.
Are we looking for males and females? Are we looking for certain behaviors? And those criteria get determined.
Once you have solidified that, you can start to look at sample size and geography or other core demographics.
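As a point of reference for the sample-size step, the sketch below applies the standard formula for estimating a proportion, n = z² · p(1 − p) / e². The 95% confidence level, 50% assumed proportion and 5-point margin of error are illustrative defaults, not recommendations from the session.

```python
# Minimal sketch of the standard sample-size formula for a proportion.
import math

def sample_size(z: float = 1.96, p: float = 0.5, margin: float = 0.05) -> int:
    """n = z^2 * p * (1 - p) / e^2, rounded up to a whole respondent."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size())              # 385 completes at 95% confidence, +/- 5 points
print(sample_size(margin=0.03))   # 1,068 completes for a +/- 3-point margin
```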
Now that we're grounded in the research objective and whom we're sampling, the real fun begins. It's creating the research tools and instruments for data collection, and this is where expertise and experience can really make the difference.
Qualitative and quantitative instruments have different designs, needs and considerations. I'm going to talk about both of them separately, but regardless, we do need to make sure we're creating a valid design, testing it and ensuring we have consistent results.
Let's dive in a little bit more with qualitative research.
Qualitative is meant to be more exploratory and expansive. This type of research is meant to get consumers comfortable to open up about a topic and to go really deep to help uncover those key nuggets that really impact teams. It's about observing and learning from consumers.
So, with this in mind, it is important to ensure the research tools and environment are structured to foster this phenomenon. Discussion guides, diaries, shopping exercises and story writing prompts are all great tools to uncover deep insights.
What can make the difference is how these are executed and how they are used in concert with one another to meet broader objectives.
While it is important to ensure the research objectives are being met when these tools are designed, the respondent experience should also be a key consideration, particularly for in-person.
Qualitative is meant to stimulate conversation and uncover insights. It is meant to go deep on topics and give respondents equal airtime. This is where being aligned on objectives is important.
It can be tempting to add too many topics or probes to a discussion guide, particularly if a team is under stress, pressure or has a lot riding on the research. Sometimes when teams do this, it's in the spirit of getting it all, but what can happen is the resulting data are thin and some of that richness is sacrificed when too much is added.
In our experience, a more focused discussion guide that allows for respondents to talk from their own perspectives and engage in conversations with each other leads to better data.
An overpacked guide makes the sessions feel rushed and inorganic. This can cause frustration for the respondents because they want to be heard but can't convey their points, and sometimes it leads to confusion, shutdown or, worse, sidebar conversations, all of which impact data quality.
So, a question you may ask is, does this also play out for online qualitative research?
The short answer is yes. While online sessions may not be happening in real time or live, an overpacked discussion guide for online discussion boards or virtual focus groups can lead to fatigue, weak answers and high rates of dropout as well, all of which have an overall impact on data quality.
So, focus is key here.
We're going to switch gears a little bit and now start to talk about quantitative data quality. We know that in quantitative research, the questionnaire or survey can be fielded either in person or online.
So, regardless of whether respondents are answering a survey from home or coming into a central location to complete a product test, survey and questionnaire design is key to getting high-quality data.
Let's talk first about an online survey.
Of course, questionnaire length is important. If the questionnaire is too long or includes too many screens or prompts, respondents may get frustrated and abandon the survey.
If questions are repetitive or use too many questions to ask about a similar theme, respondents may get confused.
So, it is important to ensure the questionnaire is engaging, logical and realistic in length.
And when we say things like engaging, logical and realistic, we mean are we asking respondents to recall behavior from over a year ago? While that may be important to know, are they actually able to recall that behavior with accuracy?
The answer is maybe, if it's something like shopping for insurance and it's more of a one-time, larger purchase or occasion, but not likely if it's about something like a specific brand of popcorn that they purchase on a more frequent basis. It really depends on the behavior and its magnitude in a respondent's life.
So, questions like these will get answered, but the true question is, how reliable are those data points? And so, these considerations need to be taken into account for in-person quantitative research as well.
At Blueberry, we do a lot of product testing and central location tests. In these examples, when consumers are coming in to complete a taste test, we need to really think through how to structure the questionnaire appropriately.
Let's think about a scenario where a brand's team has a new concept and the respondents will be coming into the facility to taste four new samples. The test needs to be set up with the respondent experience in mind.
Let's say we're testing pancakes or coffee, foods that are typically consumed in the morning. It makes sense to test these products earlier in the day rather than later in the day.
So, in staying with the theme that the questionnaire should be engaging, realistic and logical, let's say in the example of pancakes, the brand team is interested in understanding overall value proposition and perceptions of the pancakes.
We can certainly ask a question on a five-point scale about whether or not the sample represented a good value, but if we haven't provided any pricing or pack size information, the answer to that question may not be accurate. It's really important to provide the proper context when asking specific questions.
Another example in product research in particular is oftentimes preference questions get asked, which is a wonderful way to learn about products. However, it can be difficult to ask preference questions if a lot of time has elapsed between sample evaluations, particularly if there are four or more samples in a product set.
This type of recall is difficult and may lead to inconclusive data. While these questions can be asked, it is important to determine if respondents can accurately answer them and if the resulting data is valid.
To ensure that your measurement tools are spot on, it's really important to consider all of these factors and characteristics and to have some fun and inject creativity into your tool design.
Now that we've designed the best research tools and instruments to answer our business questions, it's important to start to collect the data.
So, even with the alignment of objectives, the right sampling frame and solid measurement tools, data collection is still key. Having proper protocols in place and a strong QA process are essential to ensure the data that you are collecting is of high quality.
So, when we're working through in-person research methodologies, we need to ensure that any execution staff involved in the research is very well trained and follows standardized methods and protocols consistently. This may mean training staff, scheduling kickoff calls and really walking everyone through the project and the standards.
This applies to both qualitative and quantitative research methodologies. It's important for the execution staff to ensure that the protocol is being followed, all questions are answered and completion rates are met. This ensures that we don't have any missing data and that we're not injecting bias into our data collection process.
Similarly, for online research, the teams prior to launch need to ensure the onscreen instructions and prompts make sense for the end user. We need to ensure the programming team knows how to interpret the instructions appropriately, so the onscreen presentation is correct.
It's important to do pre-tests, time discussion guides and ensure the questionnaire, login and skip patterns all work appropriately. All of these safeguards are necessary and are the final step to ensuring great data quality.
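For online studies, part of that pre-test can be automated. The sketch below is one hedged example of a soft-launch check, with an invented export file and invented column names (category_user, brand_usage, status), that verifies a skip pattern held and that the pilot completion rate looks healthy before full fielding.

```python
# Minimal soft-launch QA sketch with hypothetical column names.
import pandas as pd

pilot = pd.read_csv("soft_launch_export.csv")  # hypothetical pilot export

# Skip logic: respondents screened as non-users of the category should never
# have answered the brand-usage follow-up.
leaks = pilot[(pilot["category_user"] == "no") & pilot["brand_usage"].notna()]
assert leaks.empty, f"{len(leaks)} non-users answered a question they should have skipped"

# Completion: confirm the pilot is hitting an acceptable completion rate
# before committing the full sample.
completion_rate = (pilot["status"] == "complete").mean()
print(f"Pilot completion rate: {completion_rate:.0%}")
```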
So, what are the key takeaways?
A good design equals good data, but it always begins with a clear research question. Aligning on this as a team is incredibly important. Once this alignment is met, it's very important to follow through with careful design, sampling, measurement, protocols and ethics.
In closing, a strong research design leads to accurate insights and smarter business decisions.
And that is what we have for you today. I have a few members of the team standing by ready to answer and participate in the Q&A.