The consequences of poorly-designed surveys

June 2014
Darrin Helsel

Article Abstract

Darrin Helsel shows how to make sure your surveys are asking the right questions in the right ways.

Editor’s note: Darrin Helsel is the director of quantitative programs at Vivisum Partners, a Durham, N.C., research firm. He is also co-founder and principal for Distill Research, Portland, Ore. This article is an edited version of a blog post that originally appeared on the Vivisum blog page under the title “Junk in, junk out: the consequences of poorly-designed survey instruments.”

Poorly designed survey instruments will yield less-than-reliable data. There. I said it. DIY platforms are springing up all over, allowing any Tom, Dick or Harry to throw a survey into the ether to collect data to inform their business decisions. What a thrill it is to collect this data for pennies per respondent! However, unless your questionnaire is designed well (which I’ll explain in a moment), the data you collect could be next to useless. Or worse, it could be just plain wrong.

We all follow our own nature and, for many of us, our occupation reflects that nature:

• For marketers, the job is to educate and inform customers and prospects about their company’s value proposition. Hence, when commissioning or conducting research, it’s in their nature to position the value proposition of the product or service they’re marketing as favorably as possible, regardless of the goal of the research.

• For product designers, it’s in their nature to create based on the input that informs their inner muse. So when commissioning or conducting research, it’s in their nature to collect data that supports their own muse, regardless of the goal of the research.

• For product managers, it’s in their nature to shepherd their products to market, managing costs and processes to get them to market as efficiently as possible. So when commissioning or conducting research, it’s in their nature to minimize impediments to their process, regardless of the goal of the research.

Market research, by comparison, is guided by the scientific method. It’s in a researcher’s nature to ask questions in a detail-oriented, scientific fashion. As we know from middle-school science class, the scientific method is a system by which curiosity is organized through experimentation to test, and potentially reject, a null hypothesis. In so doing, the researcher follows a methodology to ensure that the experiment is repeatable with the same subjects and reproducible with a new set of subjects.

Repeatable. If the same subject is asked the same question six, 24 or 48 hours later, the answer will be the same.

Reproducible. If the same survey instrument is asked of a different population, drawn with the same sample parameters, it yields the same proportion of responses.
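Reproducibility can be sanity-checked statistically. As a rough sketch (the function and the respondent counts below are hypothetical illustrations, not from this article), a two-proportion z-test tells you whether the share of respondents choosing an answer in two independent samples is consistent with a single underlying proportion:

```python
import math

def two_proportion_z(yes1, n1, yes2, n2):
    """Two-proportion z-statistic: how far apart are two sample proportions,
    relative to the sampling error expected if they share one true proportion?"""
    p1, p2 = yes1 / n1, yes2 / n2
    pooled = (yes1 + yes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 120 of 200 said "yes" in sample 1; 130 of 210 in sample 2.
z = two_proportion_z(120, 200, 130, 210)
print(abs(z) < 1.96)  # True: within conventional 95% bounds, so consistent
```

If |z| stays under the conventional 1.96 cutoff, the two waves are telling the same story; a large |z| suggests the instrument, the sample or both changed between fieldings.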

Hence it’s in the researcher’s nature to ask questions and record answers using a methodology to ensure valid data – data that’s not overly pedagogical for the marketer; data that represents what the market thinks of a product’s design, regardless of whether it fits with the product designer’s musings; and data that may disrupt the processes of the product manager. It’s representative of a given market or audience; it’s unbiased and objective; and it is repeatable and reproducible to demonstrate its validity.

To ensure these qualities in the data, researchers place great emphasis on questionnaire design. Why? We have a saying: junk in, junk out. Without a quality design that follows best practices, we can’t ensure the quality of the data on the back end of the study. Here are five (of the many) best practices we follow when designing questionnaires:

Don’t confuse your respondents. This seems like a no-brainer but you’d be surprised at how easily non-researchers manage it. For instance, an easy way to confuse respondents is to force them to pick a single response when more than one response describes them or their experience.

The resulting discomfort is cognitive dissonance, a term Leon Festinger coined in the 1950s in the field of social psychology: the mental stress a person experiences when holding two or more contradictory beliefs, ideas or values at the same time. In survey science, cognitive dissonance can produce two outcomes: 1) respondents get frustrated and quit the survey, lowering your response rate and risking unmeasured bias in your results; or, worse, 2) they get frustrated and angry and populate your survey with bogus answers. Hence, great care is required to create response lists that are mutually exclusive and that offer options covering the experiences of 80 percent of your respondents. The other 20 percent is typically reserved for “Other, specify” write-in responses.
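A quick way to catch non-exclusive options is to check numeric response brackets programmatically. This is a minimal sketch with hypothetical age brackets; a pair of options is flagged if their ranges share any value:

```python
def overlaps(ranges):
    """Return pairs of (lo, hi) bracket options whose ranges overlap."""
    bad = []
    for i in range(len(ranges)):
        for j in range(i + 1, len(ranges)):
            (lo1, hi1), (lo2, hi2) = ranges[i], ranges[j]
            if lo1 <= hi2 and lo2 <= hi1:  # standard interval-overlap test
                bad.append((ranges[i], ranges[j]))
    return bad

# Hypothetical age brackets: "18-25" and "25-34" both claim age 25.
print(overlaps([(18, 25), (25, 34), (35, 44)]))  # [((18, 25), (25, 34))]
```

Rewriting the first bracket as 18-24 makes the list mutually exclusive, and a 25-year-old respondent no longer faces two true answers to one question.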

Know what you’re measuring. Beyond keeping response lists clean, knowing what you’re measuring also entails avoiding double-barreled questions. When a question incorporates two or more phenomena (“How satisfied are you with the price and quality of our product?”), which one does a respondent’s answer represent? A good rule of thumb is 1:1 – one question, one metric.

Ground behavioral questions in a distinct space of time. Prior to the emergence of big data, which measures behaviors within a given sphere (credit card transactions, phone calls, interactions with health care professionals, etc.), measuring much of our behavior required asking about it in a survey. The pitfall of this can be our notoriously faulty memories. Commenters in numerous fields have pontificated on the personalization of memory: as soon as we see or do something, that action gets interpreted by our brains and it’s this interpretation that makes up our memory – not the action itself. The further an action recedes in time, the more pronounced this becomes. Hence when asking about a behavior, it helps to ground the question in a time frame that’s as immediate as possible, while balancing the probability that respondents have performed the behavior often enough in that time frame to yield useful data. For small behaviors, a day, a few days or a week may be a suitable amount of time. For bigger behaviors, one, three or six months may be more appropriate. Avoid asking about “average” behaviors like you’d avoid a zombie apocalypse.

Ask questions that your respondents can answer. By that I mean, if they’ve indicated they’ve never used a product, don’t follow up with a question about their satisfaction with said product. Most, if not all, Internet survey platforms support filtering, often called skip logic. Filter out respondents who shouldn’t be asked a question given their previous responses. You’ll minimize frustration and maximize the validity of the data you collect as a result.
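At its core, skip logic is just a routing rule keyed on earlier answers. The sketch below (the question IDs are hypothetical, not from any particular platform) shows the idea in plain Python: a respondent who has never used the product is never shown the satisfaction question:

```python
def next_question(responses):
    """Route the respondent based on prior answers (hypothetical question IDs)."""
    if responses.get("q1_used_product") == "no":
        return "q3_why_not"        # skip the satisfaction question entirely
    return "q2_satisfaction"       # only product users rate their satisfaction

print(next_question({"q1_used_product": "no"}))   # q3_why_not
print(next_question({"q1_used_product": "yes"}))  # q2_satisfaction
```

Most platforms let you configure the same rule through a point-and-click interface; the point is that the branch happens before the respondent ever sees an unanswerable question.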

Seek opportunities NOT to bias your respondents. Biases, both measured and unmeasured, can be the bane of your survey data. One source of bias that’s easily accounted for and rectified lies in the way you phrase your questions. Rating questions, for instance, are easily susceptible to being asked in a biased way. As a rule of thumb, always mention both ends of the scale in the way you phrase the question (“How satisfied or dissatisfied are you…”) so that, even unconsciously, you permit the respondent to consider both sides. By mentioning only one side, it’s almost as if you control their eyes: they immediately seek the side of the scale you mentioned and select their preferred answer.

Just as each occupation follows from each person’s nature, it’s also part of our shared DNA that we respond positively to content that resonates with us. That is, we seek to understand the world in our own image or experience. When presented with a question, we seek to find our own answer in that question. It’s how we have survived these millennia – by finding a common language by which to create community. We learned early on that there’s power in numbers. These best practices will help you collect the repeatable and reproducible numbers you need to make the decisions you have to make.

