Sponsored content 

Editor's note: Julia Eisenberg is vice president, insights at iModerate, a division of 20|20 Research.  

Do you ever grocery shop without a list? A list-free trip often seems easy and even liberating. It can feel like a waste of time and brainpower to write down everything you need. That was my point of view until recently. It wasn’t until a fed-up family member demanded I start making grocery lists that I realized how much more I was spending and how futile my unplanned aspirational purchases were. Dragon fruit, really? Two pounds of chia seeds? Nice try! Not only was I swimming in unused kumquats but I was so excited about my exotic purchases that I was forgetting many of the staples I truly needed. I was complicating my own life. Why? Because I wasn’t taking time to plan.

Whether it’s grocery shopping or designing consumer research, more can feel like better. But I’ve found that more is usually the enemy of efficiency and clarity. The research we design has higher stakes and a bigger budget than a weekly shopping trip, so we must prioritize intentional purchases fueled by clearly articulated objectives. Without planning and forethought, we’re left with overripe piles of unused and unnecessary data. We can do better – we must. It all starts with understanding the value of saying no.

As an industry, where do we typically make the wrong turns that lead us into the trap of saying yes to too much? In the past, fielding a study was a manual, time-consuming process with milestones measured in weeks or even months. A study 20 years ago felt more like a long journey than a quick trip, so researchers understandably packed studies full of objectives because time was on their side. However, the world of research has changed, largely due to innovations in digital methods and approaches, opening up a new paradigm of fast, iterative studies.

Trouble breaking habits

We’re still having trouble breaking the habits of the past and learning to adapt to this new, leaner research world. We analyzed 495 responses to the Quirk’s survey question asking what defines poor-quality results. Using a combination of text analytics and human smarts to make sense of the data, we identified a few key themes. First and foremost, we heard loud and clear that issues with respondents and sample quality are constant offenders. It’s no secret these two topics can be touchy and we could write volumes on them alone. But since this article requires brevity, I’ll set these offending topics aside. Beyond sample and respondent quality, we saw some compelling concepts dominate the conversation. Saying yes too much tends to deliver bloated studies that fall short in three common areas: poorly defined objectives, obvious errors and meaningless results. (figure 1)
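
For readers curious what a first automated pass over open-ended verbatims like these can look like before the human-smarts step, here is a minimal sketch that clusters responses into candidate themes. It is purely illustrative: the TF-IDF-plus-k-means pipeline (via scikit-learn) and the toy responses are assumptions made for the example, not the actual tooling or data behind this analysis.

    # Illustrative sketch only: one possible first pass for surfacing themes in
    # open-ended survey responses before a human analyst reviews and names them.
    # The TF-IDF + k-means pipeline and the toy verbatims below are assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    responses = [
        "Sample quality was poor and respondents were not engaged",
        "The objectives were never clearly defined up front",
        "The final report was full of obvious errors",
        # ...the rest of the open-ended verbatims would go here
    ]

    # Turn each verbatim into a weighted bag-of-words vector
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(responses)

    # Group verbatims into a handful of candidate themes
    n_themes = 3
    model = KMeans(n_clusters=n_themes, random_state=0, n_init=10)
    model.fit(X)

    # Show the top terms per cluster so an analyst can accept, rename or reject each theme
    terms = vectorizer.get_feature_names_out()
    for theme, center in enumerate(model.cluster_centers_):
        top_terms = [terms[i] for i in center.argsort()[::-1][:5]]
        print(f"Candidate theme {theme + 1}: {', '.join(top_terms)}")

In practice the number of clusters would be tuned and every candidate theme would still be read, merged or discarded by a researcher; the automation only speeds up the sorting.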

Poorly defined objectives. Bad data has some obvious origins – poorly written surveys, biased (or unclear) questions and incorrect survey logic or probes. While we may question methodologies in hindsight, overwhelmingly we heard that poorly defined objectives are to blame for substandard results and the worthless conclusions to which they lead. To correct this, it’s critical to get to the root of how and why objectives end up poorly defined. The list is long but three research sins tend to be the main culprits.

  • Trying to fit 10 pounds of objectives into a 5-pound sack. Cramming many goals into one study overcrowds it and dilutes its meaning and purpose. While there is no such thing as an absolute right number of objectives, a good rule of thumb is to pay attention to prime numbers. Two or three objectives usually translate to a nicely focused approach. Five should raise a red flag – is each objective truly necessary? Anything beyond five should be evaluated and split into separate endeavors.
  • Apathy. Feeling overworked and overwhelmed with a boring study (we’ve all had them!) can lead to design apathy. It seems good enough, so we approve the objectives and move on. We say yes when we should say, “No, it’s not there yet.” It can be painful to dedicate energy to the design phase but it is critical in making the exercise worth your time and money. To combat this, consider reverse-engineering the design. Start by defining what a successful outcome will look like, then craft your objectives to produce that outcome.
  • Tagalong objectives. These add-ons can really throw a study into a tailspin. They seem harmless and small – just one or two extra initiatives slipped in at the last minute to satisfy a random outside request. Again, this is the time to say no. Tagalong objectives pull attention away and end up getting more airtime than the objectives the study was designed around in the first place.

Obvious errors. There is no worse feeling than spotting an error within minutes of receiving a final report or clean data. It instantly saps confidence in all the results. Sadly, there is an often-missed yet critical step that reduces the risk of errors in end results: quality control. Many don’t take the time to inquire about and pressure-test the quality-control process. This is as important as verifying that study objectives are universally understood. Clients should feel as comfortable with their partners’ quality-control process as they do with every other aspect of the research methodology.

Meaningless results. For anyone who depends on research to drive growth inside their business, meaningless results add insult to injury. There is nothing more frustrating than a study that provides nothing new, makes no logical sense and doesn’t answer any questions. Expecting correct results that add value to a brand’s direction and bottom line seems like a simple wish. What often goes wrong is that clients and vendors engage in a courtesy showdown during the sunny, early stages of research. With all the promise of a new study stretching out ahead of us, we may neglect to ask the tough but necessary questions. What happens if the results are worthless? What if our results are too good to be true? We must be respectful and direct – and conscientious and clear about our expectations.

Not the quality you’d hoped for

What do you do when the results of your research are not the quality you’d hoped for? (figure 2) The Quirk’s survey provided 475 detailed, animated responses to this question and it’s clear most researchers have experience with this tough situation. It happens. It’s awful, but we’ve all found ourselves wishing we’d been firmer and said no sooner. We looked at the responses using a combination of machine analytics and human analysis and found that, when faced with this challenge, there are a few key things we fall back on to move forward:

  • Ask for a do-over. Ask the vendor/supplier to rework the deliverable, fixing mistakes and blatant problems. Many will give partners a shot at this but few plan to do business with them again in the future.
  • Supplement. When out of time and budget, respondents say they will use secondary or past research to augment the bad research. Some may add qualitative research to help the situation or simply vow not to use the same methodology again.
  • Replace bad sample. Rather than start from scratch, many will first ask providers to clean or eliminate inappropriate sample and/or send out a new (and more representative) sample, replacing poor respondents with quality ones.
  • Call it directional and salvage what you can. Without time to re-run a study, researchers will often try to turn lemons into lemonade to avoid a total loss. They will take whatever insights are appropriate and try to glean something from the research. Many will add caveats to their findings, treating the research as directional instead of quantitative, statistically significant or representative.

As with setting objectives and planning a project, open, direct communication and a dialogue focused on solutions give the best chance of salvaging poor-quality results.

Clearer, simpler and more actionable

When we as researchers are disciplined in our project design and don’t throw everything but the kitchen sink into a study, the results we receive are clearer, simpler and more actionable. We should never have to waste energy on disappointing results. Adding even a dash of rigor and discipline to our process, and infusing it into how we hold vendors and suppliers accountable, can make a world of difference. To summarize:

  • Reduce. Take a red pen to excessive objectives. Two to three should always be the goal – be critical of anything more.
  • Care. Hold yourself, your team and your partners accountable to giving a hoot about the purpose of your research. Great work never comes from apathetic design.
  • Protect. Guard the integrity of your work and don’t let others add unrelated objectives that could draw focus from your main purpose.
  • Quality-control. Value the quality-control process as much as the objectives and methodology. Vet it early and often.
  • Be honest. Never be shy about asking the tough questions up front – better to proactively discuss issues (and plan to avoid them) than to be so courteous you’re left with a big pile of useless results.

Adding these layers of clarity and accountability to your research will help you avoid the pitfalls of poor-quality design. “No” can be empowering and transformational when used to plan with intention. If you need me, you’ll find me at the grocery store trying to say no to a bushel of quince.