The consequences of poorly-designed surveys



Article ID:
20140625-1
Published:
June 2014
Author:
Darrin Helsel

Article Abstract

Darrin Helsel shows how to make sure your surveys are asking the right questions in the right ways.

Editor’s note: Darrin Helsel is the director of quantitative programs at Vivisum Partners, a Durham, N.C., research firm. He is also co-founder and principal for Distill Research, Portland, Ore. This article is an edited version of a blog post that originally appeared on the Vivisum blog page under the title “Junk in, junk out: the consequences of poorly-designed survey instruments.”

Poorly-designed survey instruments will yield less-than-reliable data. There. I said it. DIY platforms are springing up all over, allowing any Tom, Dick or Harry to throw a survey to the ether to collect data to inform their business decisions. What a thrill it is to collect this data for pennies per respondent! However, unless your questionnaire is designed well (which I’ll explain in a moment), the data you collect could be next to useless. Or worse, it could be just plain wrong.

We all follow our own nature and, for many of us, our occupation reflects that nature:

• For marketers, the job is to educate and inform customers and prospects about their company’s value proposition. Hence, when commissioning or conducting research, it’s in their nature to position the value proposition of the product or service they’re marketing as favorably as possible, regardless of the goal of the research.

• For product designers, it’s in their nature to create based on the input that informs their inner muse. So when commissioning or conducting research, it’s in their nature to collect data that supports their own muse, regardless of the goal of the research.

• For product managers, it’s in their nature to shepherd their products to market, managing costs and processes to get them to market as efficiently as possible. So when commissioning or conducting research, it’s in their nature to minimize impediments to their process, regardless of the goal of the research.

Market research, by comparison, is guided by the scientific method. It’s in a researcher’s nature to ask questions in a detail-oriented, scientific fashion. As we know from middle-school science class, the scientific method is a system by which curiosity is organized through experimentation to disprove a null hypothesis. In so doing, the researcher follows a methodology to ensure that the experiment is repeatable with the same subjects and reproducible with a new set of subjects.

Repeatable. If the same subject is asked the same question six, 24 or 48 hours later, the answer will be the same.

Reproducible. If the same survey instrument is fielded to a different population drawn with the same sample parameters, it yields the same proportions of responses.
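These two properties can be checked numerically in a pilot. Below is a minimal sketch (the data, function names and scoring are my own illustration, not from the article) that scores repeatability as the share of respondents giving the same answer at retest, and reproducibility as whether two independent samples produce the same response proportions.

```python
from collections import Counter

def repeatability(first, retest):
    """Share of respondents whose retest answer matches their first answer."""
    matches = sum(a == b for a, b in zip(first, retest))
    return matches / len(first)

def response_proportions(answers):
    """Proportion of respondents choosing each answer option."""
    counts = Counter(answers)
    total = len(answers)
    return {option: n / total for option, n in counts.items()}

# Same subjects asked the same question twice, 24 hours apart
first  = ["yes", "no", "yes", "yes", "no"]
retest = ["yes", "no", "yes", "no",  "no"]
print(repeatability(first, retest))  # 0.8

# Two different samples drawn with the same sample parameters
sample_a = ["yes", "yes", "no", "yes"]
sample_b = ["yes", "no", "yes", "yes"]
print(response_proportions(sample_a) == response_proportions(sample_b))  # True
```

In practice a researcher would set a tolerance rather than demand identical proportions, but the sketch shows what is being compared in each case.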

Hence it’s in the researcher’s nature to ask questions and record answers using a methodology to ensure valid data – data that’s not overly pedagogical for the marketer; data that represents what the market thinks of a product’s design, regardless of whether it fits with the product designer’s musings; and data that may disrupt the processes of the product manager. It’s representative of a given market or audience; it’s unbiased and objective; and it is repeatable and reproducible to demonstrate its validity.

To ensure these qualities in the data, researchers place great emphasis on questionnaire design. Why? We have a saying: junk in, junk out. Without a quality design that follows best practices, we can’t ensure the quality of the data on the back end of the study. Here are five (of the many) best practices we follow when designing questionnaires:

Don’t confuse your respondents. This seems like a no-brainer but you’d be surprised at how many non-researchers do this effortlessly. For instance, an easy way to confuse respondents is by forcing them to pick a single response when more than one response describes them or their experience.

This is cognitive dissonance, a term coined by Leon Festinger in the 1950s in the field of social psychology: the mental stress and discomfort a person experiences when holding two or more contradictory beliefs, ideas or values at the same time. In survey science, cognitive dissonance can produce two outcomes: 1) respondents get frustrated and quit the survey, lowering your response rate and risking unmeasured bias in your results; or, worse, 2) they get frustrated and angry and populate your survey with bogus answers. Hence, great care is required to create response lists that are mutually exclusive and that offer options describing the experiences of 80 percent of your respondents. The other 20 percent is typically reserved for “Other, specify” write-in responses.
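The 80-percent rule above lends itself to a quick pilot-data check. The sketch below (hypothetical data and names, assuming single-select pilot answers have been captured as strings) measures how much of the pilot sample the listed options cover, with the remainder landing in “Other, specify”:

```python
def listed_coverage(answers, listed_options):
    """Share of pilot respondents whose answer falls within the listed
    options; the remainder would need an 'Other, specify' write-in."""
    covered = sum(a in listed_options for a in answers)
    return covered / len(answers)

options = {"Price", "Quality", "Convenience"}
pilot = ["Price", "Quality", "Price", "Brand loyalty", "Convenience",
         "Quality", "Price", "Convenience", "Habit", "Quality"]

coverage = listed_coverage(pilot, options)
print(coverage)  # 0.8 -> listed options cover 80% of pilot respondents
```

If coverage falls well below 0.8, the response list likely needs more options; if write-ins cluster around one theme, that theme should become a listed option in the final instrument.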

Know what you’re measuring. As with muddled response lists, knowing what you’re measuring also entails avoiding double-barreled questions. When a question incorporates two or more phenomena to be measured, which one does the respondent’s answer represent? A good rule of thumb is 1:1 – one question, one metric.

Ground behavioral questions in a distinct space of time. Prior to the emergence of big data, which measures behaviors within a given sphere (credit card transactions, phone calls, interactions with health care professionals, etc.), measuring much of our behavior required asking about it in a survey. The pitfall here is our notoriously faulty memories. Commenters in numerous fields have pontificated on the personalization of memory: as soon as we see or do something, that action gets interpreted by our brains and it’s this interpretation that makes up our memory – not the action itself. This effect is particularly noticeable for actions that recede further and further into the past. Hence when asking about a behavior, it helps to ground the question in a time frame that’s as immediate as possible, while balancing the probability that respondents have done enough of those behaviors in that time frame to yield useful data. For small behaviors, a day, a few days or a week may be a suitable amount of time. For bigger behaviors, one, three or six months may be more appropriate. Avoid asking about “average” behaviors like you’d avoid a zombie apocalypse.

Ask questions that your respondents can answer. By that I mean, if they’ve indicated they’ve never used a product, don’t follow up with a question about their satisfaction with said product. Most, if not all, Internet survey platforms come with the capability of filtering. Filter out respondents who shouldn’t be asked a question given their previous responses. You’ll minimize frustration and maximize the validity of the data you collect as a result.
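The filtering described above is usually configured in the survey platform itself, but the logic can be sketched in a few lines (the record structure and field names here are my own illustration, not any particular platform’s API):

```python
def should_ask_satisfaction(respondent):
    """Skip-logic filter: only ask the satisfaction question of
    respondents who indicated they have actually used the product."""
    return respondent.get("used_product") is True

respondents = [
    {"id": 1, "used_product": True},
    {"id": 2, "used_product": False},  # never used it -> skip the question
    {"id": 3, "used_product": True},
]

eligible = [r["id"] for r in respondents if should_ask_satisfaction(r)]
print(eligible)  # [1, 3]
```

The same pattern extends to any question that only makes sense given a prior answer: gate each downstream question on the upstream response rather than asking everyone everything.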

Seek opportunities NOT to bias your respondents. Biases, both measured and unmeasured, can be the bane of your survey data. One source of bias that’s easily accounted for and rectified can be found in the way you phrase your questions. Rating questions, for instance, are particularly susceptible to being asked in a biased way. As a rule of thumb, always mention both ends of the scale in the way you phrase the question so that, even unconsciously, you permit the respondent to consider both sides. By mentioning only one side, it’s almost as if you control their eyes: they immediately seek the side of the scale you mentioned and select their preferred answer.

Just as each occupation follows from each person’s nature, it’s also part of our shared DNA that we respond positively to content that resonates with us. That is, we seek to understand the world in our own image or experience. When presented with a question, we seek to find our own answer in that question. It’s how we have survived these millennia – by finding a common language by which to create community. We learned early on that there’s power in numbers. These best practices will help you collect the repeatable and reproducible numbers you need to make the decisions you have to make.

 
