A joint effort

Editor's note: Ron Sellers is president of Grey Matter Research. He can be reached at ron@greymatterresearch.com.


Do you want to rely on a study where nearly half the respondents aren’t giving you valid answers? Without the right steps, that’s what you could be getting from your online panel research.

“Whoa – nearly half? That’s a pretty wild claim,” is probably your first thought, followed closely by, “I know online panel has some bad respondents, but we use CAPTCHA and eliminate speeders, so we should be good.” 

If only it were that simple…

Grey Matter Research (a consumer insights consultancy) and Harmon Research (a field agency) joined forces to determine just how many disengaged, fraudulent or otherwise bogus respondents are in a typical online panel study. The result is our report Still More Dirty Little Secrets of Online Panels (e-mail me for a copy). We tried to make the cover really attractive, because the content is pretty ugly for anyone who uses online panel research.

Don’t get the notion this is an anti-panel screed or a promotion of some “proprietary” methodology over online panels. Every insights methodology has challenges and quality issues. Good researchers recognize this and do everything they can to reduce the problems and mitigate the challenges without throwing out the methodology. But from our observation, nowhere near enough researchers are taking the necessary steps to address the problems in online panel studies.

We fielded a questionnaire of about 10 minutes, with age and gender quotas, through five of the 10 largest U.S. opt-in consumer research panels. Nearly 2,000 panel members attempted to complete the questionnaire.

We used a variety of techniques to identify respondents who were giving poor-quality or fraudulent answers, such as: asking the same demographic question at the beginning and end of the questionnaire to compare responses; seeding the questionnaire with conflicting statements in the same short grid question; including obviously fake brands in brand awareness and usage questions; CAPTCHA; overall completion length; tracking the time spent on individual questions; and reviewing open-end responses.
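
To make a couple of these checks concrete, here is a minimal sketch in Python of how a repeated demographic question, a per-question timer and an overall-length check might be flagged in a respondent-level data file. The column names and thresholds are illustrative assumptions, not our actual setup.

```python
# Illustrative sketch only: column names and thresholds are assumptions,
# not the actual study's setup.
import pandas as pd

def basic_quality_flags(df: pd.DataFrame) -> pd.DataFrame:
    flags = pd.DataFrame(index=df.index)

    # Same demographic asked at the beginning and end of the questionnaire:
    # a mismatch (e.g., two different birth years) earns a problem flag.
    flags["demo_mismatch"] = df["birth_year_start"] != df["birth_year_end"]

    # Time spent on an individual question: a couple of seconds on an item
    # that takes at least 20 seconds to read and answer earns a flag.
    flags["question_speeder"] = df["concept_read_seconds"] < 20

    # Overall completion length: far below the median length (illustrative
    # cutoff of 40% of the median) earns a flag.
    flags["overall_speeder"] = df["total_minutes"] < 0.4 * df["total_minutes"].median()

    return flags

# Usage (hypothetical file):
# df = pd.read_csv("panel_completes.csv")
# print(basic_quality_flags(df).sum())  # how many respondents trip each check
```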

On this relatively brief, easy-to-complete questionnaire, we tossed out a whopping 46% of the responses as unacceptable.

Now, you’re probably wondering, “Are they holding survey respondents to an impossible expectation of perfection?” The answer is no. We recognize people make honest mistakes. They’ll hit the wrong response, misinterpret a question or get distracted on a particular screen. That’s why my firm uses a multi-step quality process. 

Certain problems result in immediate exclusion. An open-end response of “It’s good – I like it” or “lkjkljkjljkjlk” to a question asking about their favorite celebrity strongly indicates a bogus respondent. So does spending two seconds on a question that should take a minimum of 20 seconds to read and answer.

For other quality concerns, we mark each problem in the data. For any respondent, one problem is ignored. Two problems get them reviewed. Three problems get them reviewed very carefully, and four problems result in exclusion from the study.
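
Here’s a rough sketch of what that scoring logic can look like in code. The flag names are placeholders, not our actual checks; the point is the structure: hard fails exclude a respondent immediately, while softer problems accumulate toward review or exclusion.

```python
# Rough sketch of the multi-step scoring described above; flag names are
# placeholders, not the actual checks used in the study.
def classify_respondent(hard_fails: dict, problem_flags: dict) -> str:
    # Hard fails (gibberish open-ends, two seconds on a 20-second question)
    # exclude the respondent immediately.
    if any(hard_fails.values()):
        return "exclude"

    problems = sum(problem_flags.values())
    if problems <= 1:
        return "keep"      # one problem is treated as an honest mistake
    if problems <= 3:
        return "review"    # two problems reviewed; three reviewed very carefully
    return "exclude"       # four or more problems: out of the study

# Example:
# classify_respondent(
#     hard_fails={"gibberish_open_end": False, "hard_speeder": False},
#     problem_flags={"demo_mismatch": True, "fake_brand": True,
#                    "grid_contradiction": True, "straightliner": False},
# )
# -> "review"
```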

You might quibble about the exact numbers in our approach (e.g., should someone be automatically excluded at three or five problems rather than four?), but you can’t quibble with what we found: Among 1,909 respondents, we determined 1,029 to be valid, while 880 had problems so numerous or so obvious that we considered them to be bogus. 

To demonstrate that the respondents we marked as bogus actually don’t belong in our data, consider just five ways valid and bogus respondents differed:

  • Among our valid respondents, 28% strongly felt the U.S. should strictly limit immigration; among the bogus respondents, it was 62%. Worse, 70% of those same bogus respondents also strongly felt the U.S. should have no limits on immigration.
  • The average reported monthly spending on medical care was $568 among our valid respondents but an astounding $9,069 among the bogus respondents.
  • Brand awareness for Charity Navigator (a charity watchdog organization) was 15% among our valid respondents but 58% among the bogus respondents.
  • While 9% of valid respondents felt very familiar with Regions (a Southern financial institution), the number was 24% among bogus respondents.
  • When given a 200-word concept statement to read and evaluate, the average valid respondent spent 80 seconds reading it, while the average bogus respondent spent just 11 seconds (which comes out to about 18 words per second).

We have more examples in the full report, but this should suffice to give you pause about the data quality you’re getting. Just knowing that brand familiarity for Regions was nearly three times as high among bogus respondents as among valid respondents should raise concerns about your last branding study. Bogus respondents also claimed brand familiarity that was 44% higher for Citibank, 29% higher for Bank of America and 250% higher for M&T Bank.

But wait (as you hear on infomercials) – there’s more! Huntington Bank has branches in seven Midwestern states. Among valid respondents who claimed to be very familiar with Huntington Bank, 22% lived outside that seven-state footprint. This is not hard to believe, as those respondents may travel to one of Huntington’s markets, live within one of its media markets (and see its advertising) or have previously lived in one of its markets. Much harder to swallow is that among the respondents we identified as bogus who claimed strong familiarity with the Huntington Bank brand, 75% lived outside that bank’s service footprint.

Still think we just have unreasonable expectations for respondent quality?

Certain populations

While quality is a major concern for all panel studies, it’s particularly an issue with certain populations. We have consistently found that men and younger respondents are substantially more problematic. In our study with Harmon Research, 46% of all respondents were identified as bogus, but the numbers were 52% for males, 58% for non-whites (Black, Latino, Asian, Native American) and 62% for panelists under age 35.

Are these demographic groups just more likely to be fraudulent or disengaged? Not necessarily. We don’t even know that a panelist claiming to be male, Black and 28 years old is actually any of those things. Panels tend to have greater difficulty getting members from these groups. It’s likely that some bogus panelists are registering themselves as respondents who are more desirable for panels and more likely to get multiple opportunities to participate in studies. But if our research has you concerned about panel quality in a gen-pop study, how much more worried should you be if your target respondents are young men? It’s likely that the majority of your respondents are bogus.

Harmon Research fields over 40,000 interviews per month through various online panels. President Joey Harmon estimates that of the various studies his company conducts for clients:

  • 25% don’t evaluate verbatims for bogus responses beyond obvious gibberish such as “kjkjkjkljljk”
  • 50% don’t eliminate straightliners (i.e., those who agree strongly with every statement or mark every brand as very familiar; a simple check is sketched after this list)
  • 90% don’t track the time spent on individual questions
  • 90% don’t evaluate numerical open-ends for obviously bad responses (e.g., answers of “12345” or “4444” to how much they spend each year on health care)
  • 95% don’t include fake brands in awareness or usage questions
  • 95% don’t take the approach of going line-by-line through the data to identify and eliminate bogus respondents
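
For two of those measures (straightlining and obviously bad numerical open-ends), a minimal check can look like the sketch below. The column names and the exact patterns flagged are illustrative assumptions.

```python
# Hypothetical sketch of two of the checks above; column names and the
# exact patterns flagged are illustrative assumptions.
import re
import pandas as pd

GRID_COLS = ["brand_a", "brand_b", "brand_c", "brand_d", "brand_e"]

def is_straightliner(row: pd.Series) -> bool:
    # The same answer to every item in a grid (e.g., "very familiar" with
    # every single brand) gets flagged.
    return row[GRID_COLS].nunique() == 1

def is_suspect_number(value) -> bool:
    # Flags keyboard runs ("12345") and repeated digits ("4444") in a
    # numerical open-end such as annual health care spending.
    digits = re.sub(r"\D", "", str(value))
    if not digits:
        return True                                  # no usable number at all
    if len(digits) >= 3 and len(set(digits)) == 1:
        return True                                  # e.g., "4444"
    if len(digits) >= 4 and digits in "1234567890":
        return True                                  # e.g., "12345"
    return False

# Usage (hypothetical data frame):
# df["straightliner"] = df.apply(is_straightliner, axis=1)
# df["bad_spend_answer"] = df["health_spend"].map(is_suspect_number)
```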

So we’ve identified three basic facts that are critical to your online panel research. One, almost half of the respondents in a typical study are bogus. (After completing the report, we went back to some recent online panel studies Grey Matter Research had completed for clients using many of these same techniques and found the 46% we eliminated here to be pretty consistent with those prior studies.) Two, up to 95% of researchers aren’t taking sufficient steps to identify and eliminate these bogus respondents. And three, failing to eliminate bogus respondents from your studies can have a strong impact on your findings.

This leads to the obvious question: How do I fix this? Getting reliable data from online panels is not a one-step process. It involves work before, during and after the field.

Before the field

Good data collection starts well before the fieldwork. In any questionnaire, you must include multiple traps that will help you identify and weed out bogus respondents. Exactly how you do this will vary according to the content of your questionnaire. For example, you can’t include fake brands if you’re not asking about brand awareness or usage.

It’s critical that the traps you set can’t easily be avoided by bad respondents or accidentally tripped by good ones. Let’s use the fake brands as an example. They must be names that aren’t easily confused with real brands. If you’re asking about brand awareness for charitable organizations, it would be unwise to include World Poverty as a fake brand, because it’s too easy for legitimate respondents to confuse this with World Concern, World Vision or others with a similar name. When we include outlandish brands such as the Ira Wozzler Foundation, we still often get 10-15% claiming to be very familiar with the organization. That makes it an effective trap.
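
Once a trap like this is in the questionnaire, the data-side check is simple. Here’s a sketch; the brand and column names are hypothetical.

```python
# Hypothetical sketch: any claimed familiarity with a brand that does not
# exist earns a problem flag. Brand and column names are illustrative.
FAKE_BRANDS = ["ira_wozzler_foundation"]
FAMILIAR_ANSWERS = {"somewhat familiar", "very familiar"}

def fake_brand_flag(row) -> bool:
    return any(row[f"{brand}_familiarity"] in FAMILIAR_ANSWERS
               for brand in FAKE_BRANDS)
```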

Some traps are designed to disqualify respondents immediately from the questionnaire during the field process; others are designed to allow us to catch problems post-field in data quality checks.

During the field

A good field partner will be constantly monitoring your survey data for problem respondents. Harmon Research reviews data daily for speeders and bad open-end responses and its questionnaire programming allows us to employ techniques for catching bogus respondents. Grey Matter reviews the raw data multiple times throughout the field process, going through the data with our scoring system to identify respondents with multiple problems.

Working on quality while the study is in the field allows you to maintain good quota integrity and to make sure that things such as establishing separate cells with equal demographic distribution for monadic testing proceed smoothly.
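
One simple in-field check is to recount valid completes against quota targets every time flagged respondents are removed, so any shortfall can be filled while the study is still fielding. The sketch below uses hypothetical quota cells and targets.

```python
# Hypothetical sketch: quota cells and targets are illustrative.
from collections import Counter

QUOTA_TARGETS = {
    ("male", "18-34"): 125, ("male", "35+"): 125,
    ("female", "18-34"): 125, ("female", "35+"): 125,
}

def quota_shortfalls(valid_completes):
    # valid_completes: iterable of (gender, age_group) tuples for respondents
    # who have survived the quality checks so far.
    counts = Counter(valid_completes)
    return {cell: target - counts[cell]
            for cell, target in QUOTA_TARGETS.items()
            if counts[cell] < target}

# Example:
# quota_shortfalls([("male", "18-34")] * 90 + [("female", "35+")] * 130)
# -> {("male", "18-34"): 35, ("male", "35+"): 125, ("female", "18-34"): 125}
```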

After the field

Even with the efforts you put in while the fieldwork is in process, you’ll still need to do a final review of the data on the respondents who were added after your last in-field check. We’ll often exceed the final sample size, knowing we’ll be tossing out bogus respondents in the final data file. Doing this allows us to stay on schedule, rather than having to go back into the field to get another 53 completes to replace the ones we eliminated in the final data check.
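
The amount of overage needed is simple arithmetic: field roughly your target divided by (1 minus the expected bogus rate). The sketch below assumes a 46% expected rate purely for illustration; your own rate will vary by audience and panel.

```python
import math

def completes_to_field(target_valid: int, expected_bogus_rate: float) -> int:
    # Field enough completes that, after tossing the expected share of
    # bogus respondents, the target number of valid completes remains.
    return math.ceil(target_valid / (1 - expected_bogus_rate))

# Example: at a 46% expected bogus rate, 1,000 valid completes require
# fielding about 1,852 completes.
# completes_to_field(1000, 0.46)  -> 1852
```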

We also review what measures did and did not work so we can continually sharpen our ability to eliminate bad respondents and ensure more accurate data. This is how we’ve come to understand which methods perform well and which don’t contribute much to the process.

For instance, we have consistently found that “red herring” questions (e.g., “Please mark the box all the way to the left”) are not useful. They stand out from the other statements in a grid so they’re easy for bogus respondents to spot and answer correctly as they speed through your questionnaire. Bots can be programmed to recognize these and respond correctly. On the other hand, we try to include at least one open-end that every respondent receives, as the quality of these responses is often a good indicator of the overall quality of the respondent.

Takes effort

Yes, this takes an awful lot of effort. But if you want reliable, accurate data in your studies, you cannot ignore the field and just leave it to your research agency or the panel companies. If you directly manage the data collection with a field agency or panel company, you must institute these measures as part of your questionnaire design, programming and data collection. Some companies will program your questionnaire and follow your field instructions very carefully; they’re really an extension of you as the researcher. If you’re not creating or requesting measures to ensure data quality, they can’t do it for you.

If you work through a research agency, you need to have a frank discussion with them to understand exactly how they’re ensuring your data quality. Are there traps in the questionnaires they craft for you? Are these traps uniquely designed for each project to fit the overall questionnaire and respondent profile or just boilerplate measures like the same red herring question tossed into every questionnaire? What measures did they use on your last study to identify and eliminate bogus respondents and just how many were eliminated? What testing did they use to arrive at those measures? If you can’t get satisfactory answers, you need to look for someone who can explain exactly how they will proactively safeguard the quality of the data collection on your studies.

Consider one data collection vendor (who understandably wished to remain anonymous) speaking about their clients’ lack of data quality efforts: “One end client goes in and looks at each open-end to make sure it makes sense. I don’t see other end clients get that involved. We see others such as a large bank never review their data. Or a technology client; they do not review their data at all and the research agency doesn’t share any raw data with them.”

More steps

Online panel data collection (for better or for worse) is how much of quantitative research gets done today. If it’s worth doing at all, it’s worth doing right. Unfortunately, too many projects are allowing bogus respondents to influence the findings. As our investigation demonstrates, it’s time to take more steps to ensure valid, reliable, usable data.