The consequences of poorly designed surveys



Article ID: 20140625-1
Published: June 2014
Author: Darrin Helsel

Article Abstract

Darrin Helsel shows how to make sure your surveys are asking the right questions in the right ways.

Editor’s note: Darrin Helsel is the director of quantitative programs at Vivisum Partners, a Durham, N.C., research firm. He is also co-founder and principal for Distill Research, Portland, Ore. This article is an edited version of a blog post that originally appeared on the Vivisum blog page under the title “Junk in, junk out: the consequences of poorly-designed survey instruments.”

Poorly designed survey instruments will yield less-than-reliable data. There. I said it. DIY platforms are springing up all over, allowing any Tom, Dick or Harry to throw a survey into the ether to collect data to inform their business decisions. What a thrill it is to collect this data for pennies per respondent! However, unless your questionnaire is designed well (which I’ll explain in a moment), the data you collect could be next to useless. Or worse, it could be just plain wrong.

We all follow our own nature and, for many of us, our occupation reflects what’s in our nature to do:

• For marketers, the job is to educate and inform customers and prospects about the company’s value proposition. Hence, when commissioning or conducting research, it’s in their nature to successfully position the value proposition of the product or service they’re marketing, regardless of the goal of the research.

• For product designers, it’s in their nature to create based on the input that informs their inner muse. So when commissioning or conducting research, it’s in their nature to collect data that supports their own muse, regardless of the goal of the research.

• For product managers, it’s in their nature to shepherd their products to market, managing costs and processes to get them to market as efficiently as possible. So when commissioning or conducting research, it’s in their nature to minimize impediments to their process, regardless of the goal of the research.

Market research, by comparison, is guided by the scientific method. It’s in a researcher’s nature to ask questions in a detail-oriented, scientific fashion. As we know from middle-school science class, the scientific method organizes curiosity into experiments designed to reject a null hypothesis. In so doing, the researcher follows a methodology to ensure that the experiment is repeatable with the same subjects and reproducible with a new set of subjects.

Repeatable. If the same subject is asked the same question six, 24 or 48 hours later, the answer will be the same.

Reproducible. The same survey instrument, administered to a different population drawn with the same sample parameters, yields the same proportions of responses.
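
To make the first of these properties concrete, here is a minimal sketch of a test-retest repeatability check in Python. The respondent IDs and answers are hypothetical, and a simple percent-agreement statistic stands in for whatever reliability measure your study actually calls for.

    # A minimal sketch of a test-retest repeatability check; respondent IDs
    # and answers are hypothetical. Each respondent answered the same
    # question twice, e.g., 48 hours apart.
    wave1 = {"r01": "Yes", "r02": "No", "r03": "Yes", "r04": "Unsure"}
    wave2 = {"r01": "Yes", "r02": "No", "r03": "No",  "r04": "Unsure"}

    matches = sum(1 for rid in wave1 if wave1[rid] == wave2.get(rid))
    agreement = matches / len(wave1)
    print(f"Test-retest agreement: {agreement:.0%}")  # 75% here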

Hence it’s in the researcher’s nature to ask questions and record answers using a methodology that ensures valid data – data that does more than preach the marketer’s value proposition; data that represents what the market thinks of a product’s design, regardless of whether it fits with the product designer’s musings; and data that may disrupt the processes of the product manager. Valid data is representative of a given market or audience; it’s unbiased and objective; and it is repeatable and reproducible to demonstrate its validity.

To ensure these qualities in the data, researchers place great emphasis on questionnaire design. Why? We have a saying: junk in, junk out. Without a quality design that follows best practices, we can’t ensure the quality of the data on the back end of the study. Here are five (of the many) best practices we follow when designing questionnaires:

Don’t confuse your respondents. This seems like a no-brainer, but you’d be surprised at how many non-researchers do this effortlessly. For instance, an easy way to confuse respondents is to force them to pick a single response when more than one response describes them or their experience.

The underlying phenomenon is cognitive dissonance, a term coined by Leon Festinger in the 1950s in the field of social psychology for the mental stress and discomfort a person experiences when holding two or more contradictory beliefs, ideas and/or values at the same time. In survey science, cognitive dissonance can lead to two outcomes: 1) respondents get frustrated and quit the survey, lowering your response rate and risking unmeasured bias in your results; or, worse, 2) they get frustrated and angry and populate your survey with bogus answers. Hence, great care is required to create response lists that are mutually exclusive and that cover the experiences of roughly 80 percent of your respondents. The other 20 percent is typically reserved for “Other, specify” write-in responses.
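
As an illustration (not from the original article), here is a sketch that monitors the share of “Other, specify” answers in fielded data. The contact-channel options and the 20 percent threshold are assumptions drawn from the rule of thumb above.

    from collections import Counter

    # Hypothetical closed-ended answers; "Other" stands for write-ins.
    responses = ["Email", "Phone", "Other", "Email", "Other",
                 "Chat", "Other", "Email", "Phone", "Other"]

    other_share = Counter(responses)["Other"] / len(responses)
    if other_share > 0.20:  # the ~20 percent rule of thumb above
        print(f"'Other' is {other_share:.0%} of answers; the response "
              "list likely omits common options. Review the write-ins.")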

Know what you’re measuring. Beyond muddled response lists, knowing what you’re measuring also means avoiding double-barreled questions. When a question asks about two or more phenomena at once, which one does the response represent? A good rule of thumb is 1:1 – one question, one metric.
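
A crude illustration of the 1:1 rule: the hypothetical lint below simply flags conjunctions in a question stem, since they often signal two metrics packed into one question. It is not a standard tool and will produce false positives (e.g., an “and” inside a brand name), so treat flagged items as candidates for review, not verdicts.

    # A crude, hypothetical lint for double-barreled questions: conjunctions
    # in the stem often signal two metrics packed into one question.
    def flag_double_barreled(question: str) -> bool:
        words = question.lower().replace("?", "").split()
        return "and" in words or "or" in words

    print(flag_double_barreled(
        "How satisfied are you with our pricing and customer service?"))  # True
    print(flag_double_barreled(
        "How satisfied are you with our pricing?"))                       # False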

Ground behavioral questions in a distinct space of time. Prior to the emergence of big data, which measures behaviors within a given sphere (credit card transactions, phone calls, interactions with health care professionals, etc.), measuring much of our behavior required asking questions in a survey. The pitfall is our notoriously faulty memories. Commenters in numerous fields have pontificated on the personalization of memory: as soon as we see or do something, our brains interpret that action, and it’s this interpretation – not the action itself – that makes up our memory. The effect grows the further an action recedes in time. Hence, when asking about a behavior, it helps to ground the question in a time frame that’s as immediate as possible while still being long enough that respondents have likely done the behavior often enough to yield useful data. For small behaviors, a day, a few days or a week may be suitable. For bigger behaviors, one, three or six months may be more appropriate. Avoid asking about “average” behaviors like you’d avoid a zombie apocalypse.
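
One way to make that trade-off explicit is to tie the recall window to how often the behavior occurs. The frequency tiers and window lengths below are hypothetical examples of the rule of thumb, not fixed standards.

    # Hypothetical mapping of behavior frequency to recall window, following
    # the rule of thumb above: the more frequent the behavior, the shorter
    # the window.
    RECALL_WINDOWS = {
        "daily":   "in the past week",
        "weekly":  "in the past month",
        "monthly": "in the past three months",
        "rare":    "in the past six months",
    }

    def ground_question(stem: str, frequency: str) -> str:
        return f"{stem} {RECALL_WINDOWS[frequency]}?"

    print(ground_question("How many times did you dine out", "weekly"))
    # -> How many times did you dine out in the past month?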

Ask questions that your respondents can answer. By that I mean: if respondents have indicated they’ve never used a product, don’t follow up with a question about their satisfaction with said product. Most, if not all, Internet survey platforms offer filtering. Filter out respondents who shouldn’t be asked a question given their previous responses. You’ll minimize frustration and maximize the validity of the data you collect.
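
Filtering is usually configured in the survey platform itself, but the logic reduces to simple skip rules. A minimal sketch, with hypothetical question keys:

    from typing import Optional

    # A sketch of skip logic with hypothetical question keys: respondents who
    # report never using the product are never shown the satisfaction item.
    def next_question(answers: dict) -> Optional[str]:
        if answers.get("used_product") == "No":
            return None  # filtered out: nothing to ask
        return "How satisfied are you with the product?"

    print(next_question({"used_product": "No"}))   # None
    print(next_question({"used_product": "Yes"}))  # satisfaction question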

Seek opportunities NOT to bias your respondents. Biases, both measured and unmeasured, can be the bane of your survey data. One source of bias that’s easily accounted for and rectified lies in the way you phrase your questions. Rating questions, for instance, are especially susceptible to being asked in a biased way. As a rule of thumb, always mention both ends of the scale in the way you phrase the question so that, even unconsciously, you permit the respondent to consider both sides. By mentioning only one side, it’s almost as if you control their eyes: they immediately seek the side of the scale you mentioned and select their preferred answer.
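
To illustrate, here is a hypothetical helper that builds a rating-question stem naming both anchors; the topic and anchor labels are placeholders, not prescribed wording.

    # Hypothetical helper that names both scale anchors in the stem so
    # neither pole is privileged.
    def balanced_stem(topic: str, low: str, high: str) -> str:
        return (f"How would you rate {topic} on a 1-to-5 scale, "
                f"where 1 means '{low}' and 5 means '{high}'?")

    # Biased:   "How satisfied are you with our delivery speed?"
    # Balanced:
    print(balanced_stem("our delivery speed",
                        "very dissatisfied", "very satisfied"))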

Just as each occupation follows from each person’s nature, it’s also part of our shared DNA that we respond positively to content that resonates with us. That is, we seek to understand the world in our own image or experience. When presented with a question, we seek to find our own answer in that question. It’s how we have survived these millennia – by finding a common language by which to create community. We learned early on that there’s power in numbers. These best practices will help you collect the repeatable and reproducible numbers you need to make the decisions you have to make.

 
