Beating the cheaters

Editor's note: Debbie Balch is president and CEO of Elevated Insights, a Colorado Springs, Colo., research firm. 

We all know problematic respondents are a third-rail issue in the marketing research industry – quality problems go undiscussed and unaddressed publicly for fear the research field could lose credibility. Given how many professional respondents affect data quality, it is clear that a relatively small proportion of the population accounts for a large percentage of the responses.

Many different sources cite the degree to which quantitative data may be problematic and, while the estimates vary, most hover around the 15-20 percent mark. Though harder to quantify, problems exist in qualitative data as well – I have seen them firsthand. Some respondents are even brazen enough to blog about it, confessing their qualitative tricks online.

Technology has lent research increasing agility; sample and panel companies have massive reach and can return research feedback in a relatively short time frame. Simultaneously, corporate researchers are operating in an increasingly competitive and fast-moving environment, pushing their research partners for immediate data to inform business decisions.

But there has to be a balance and we in the industry must hold each other accountable.

I believe the answer to this ongoing fight for data quality is a collective effort to improve across the industry. Corporate researchers should seek partners committed to quality, and those partners in turn should seek data collection and sample providers committed to truthful responses.

Quantitative research: issues and approaches

Online surveys currently dominate the quantitative market research space, and for good reason: we, as researchers, can get feedback from thousands of people all over the world in a matter of hours. But the anonymity of online surveys, and the ease of quickly creating an e-mail address and an online presence, has had a huge impact on the validity of the data collected. Just a few of the challenges we face as researchers include: leading or price-focused advertising; professional respondents; poor sample quality; untruthful responses; lazy/inattentive respondents; and bots/autofill software. We address several of these below.

Leading or price-focused advertising. Advertising messaging can be the catalyst for the poor response types we should be most concerned with. Price-focused ads cause potential participants to enter a survey expecting easy money for little effort – a serious issue when the credibility of the marketing research industry is rooted in honest, thoughtful insights. Leading copy points can cue the desired type of respondent and cause some to change their answers in order to qualify. And because online chatter continues to degrade the quality of responses, we need to be very careful about who completes our surveys.

Professional respondents. An important consideration is that professional respondents may be even more prevalent in quantitative research than in qualitative, because they have the luxury of hiding behind a computer screen. According to recent studies and professional industry resources, 42 percent of North American respondents claim to participate once a week or more often – and that figure doesn’t take underreporting into account.

Many professional respondents provide honest, thoughtful answers; however, if a professional respondent’s goal is to take as many surveys as possible, the path toward that goal will likely include methods that undermine data quality. While qualitative screening has its own challenges, screening out or terminating unqualified respondents in online surveys is arguably even more difficult. Rigorous adherence to screening, design elements and data cleaning helps filter out these respondents.

Choosing quality sample providers. An integral way to promote data quality is to choose sample companies of the highest caliber – ones that implement techniques such as:

  • geolocation checks;
  • device fingerprints;
  • participation limits;
  • cross-referencing – validating respondent information against other databases or lists;
  • unique IDs – giving respondents a unique ID or code so only those invited can take the survey;
  • validated sample – third-party companies run some of these checks, and others, across various panels; sample vetted this way is often referred to as validated sample.

Yes, you may be able to find cheaper sources but, since sample is a relatively small cost in an overall turnkey research project, this is not an area to cut corners. A few of these checks are sketched below.
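The field names (device_id, ip_country, completes_this_week) and the weekly cap in this example are hypothetical, and real providers layer far more signals than this:

```python
# Minimal sketch of intake checks a sample provider might automate.
from collections import Counter

WEEKLY_LIMIT = 3  # illustrative participation cap

def flag_panelists(panelists, expected_country="US"):
    """Map panelist ID -> reasons, for panelists failing basic checks."""
    device_counts = Counter(p["device_id"] for p in panelists)
    flagged = {}
    for p in panelists:
        reasons = []
        # Device fingerprint: the same device showing up under multiple IDs
        if device_counts[p["device_id"]] > 1:
            reasons.append("shared device fingerprint")
        # Geolocation: IP-derived country should match the market being sampled
        if p["ip_country"] != expected_country:
            reasons.append("geolocation mismatch")
        # Participation limit: cap completes per week
        if p["completes_this_week"] > WEEKLY_LIMIT:
            reasons.append("over participation limit")
        if reasons:
            flagged[p["panelist_id"]] = reasons
    return flagged

panelists = [
    {"panelist_id": "A1", "device_id": "d-9", "ip_country": "US", "completes_this_week": 1},
    {"panelist_id": "A2", "device_id": "d-9", "ip_country": "CA", "completes_this_week": 7},
]
print(flag_panelists(panelists))
# {'A1': ['shared device fingerprint'], 'A2': ['shared device fingerprint', ...]}
```

In practice a flag like this might route a panelist to manual review rather than trigger automatic removal.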

Survey design. Outside of doing our due diligence and buying the best sample, we, as researchers, are also tasked with controlling how we design our surveys. Most respondents want to be honest and provide good information, but we have to make sure the conditions they are under promote this – and those conditions are the components and design of the survey we create.

In general, we can find success by including variety through visual items, relatability with a conversational tone, a reasonable survey length and mobile-optimized questions. Some more-specific design elements that can be employed (where applicable) include:

  • honesty pledges;
  • time-spent requirements, specifically at the page level;
  • fictional items/brands (red herrings);
  • consistency checks;
  • Captchas, which can stump most bots/autofill software;
  • disqualifications that aren’t clearly linked to the exact question in the survey.

A few of these traps are sketched below.
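The question names, the planted brands and the five-second page floor in this example are illustrative assumptions, not recommendations:

```python
# Score one survey response against three of the design traps above.
RED_HERRINGS = {"Brandex", "Luminara"}  # fictional brands planted in the brand list
MIN_PAGE_SECONDS = 5                    # illustrative page-level time floor

def design_trap_flags(resp):
    """Return the design-trap violations for one survey response."""
    flags = []
    # Red herring: claiming to use a brand that does not exist
    if RED_HERRINGS & set(resp["q_brands"]):
        flags.append("selected fictional brand")
    # Consistency check: the same fact asked twice should match
    if resp["q_age"] != resp["q_age_recheck"]:
        flags.append("inconsistent answers")
    # Page-level timing: any page finished faster than a person could read it
    if any(t < MIN_PAGE_SECONDS for t in resp["page_seconds"]):
        flags.append("page answered too quickly")
    return flags

resp = {"q_brands": ["Coke", "Brandex"], "q_age": 34,
        "q_age_recheck": 29, "page_seconds": [12, 3, 9]}
print(design_trap_flags(resp))  # all three traps tripped
```

Tripping a single trap may be noise; tripping several is a much stronger signal that a response should be terminated or flagged.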

Data cleaning. Despite choosing quality sample and implementing the survey design pieces we’ve talked about, things will fall through the cracks. It’s imperative that data be cleaned thoroughly, no matter how tedious that may be. Data should be cleaned against the following criteria: duplicate e-mails; duplicate IP addresses; survey speed; open-ended responses; and consistency.
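As a minimal sketch of an automated first pass against these criteria – the column names and the speeder cutoff (here, finishing in under a third of the median duration) are illustrative assumptions:

```python
# Flag responses for review against the cleaning criteria above.
import pandas as pd

df = pd.DataFrame({
    "email":        ["a@x.com", "b@x.com", "a@x.com"],
    "ip":           ["1.1.1.1", "2.2.2.2", "3.3.3.3"],
    "duration_sec": [480, 95, 510],
    "open_end":     ["Liked the taste and the price.", "good", "fine I guess"],
})

median = df["duration_sec"].median()

df["dup_email"]     = df.duplicated("email", keep=False)        # duplicate e-mails
df["dup_ip"]        = df.duplicated("ip", keep=False)           # duplicate IP addresses
df["speeder"]       = df["duration_sec"] < median / 3           # survey speed
df["thin_open_end"] = df["open_end"].str.split().str.len() < 3  # open-ended responses

# Flagged rows go to human review, not straight to deletion.
review = df[df[["dup_email", "dup_ip", "speeder", "thin_open_end"]].any(axis=1)]
print(review)
```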

While many solutions can be automated, the human eye is still the best way to find poor responses. It’s important to note that this cleaning process is not completely objective; watching articulation, straightlining and logic allows us to catch poor responses. At the same time, we don’t want to flag people simply because they gave a response that doesn’t make sense to us personally. You may find more success looking for patterns of inconsistency across an individual’s replies.

Qualitative research: issues and approaches

Qualitative research has its challenges as well, most notably lazy recruiting, online qualitative respondents who misrepresent who or where they are, and professional or posing respondents in in-person qualitative research.

Lazy recruiters. Recruiters who focus primarily on filling their recruitment quotas rather than on the quality of participants can have an extremely negative impact on the validity of collected data. Employ key processes to maintain response integrity: utilize articulation questions; establish relevancy to the topic; and be aware of respondents who already know each other.

An impossible recruit magically filled on the last day, multiple respondents who work in the same industry, respondents who know each other and/or respondents who aren’t articulate are often good indicators of lazy recruiting and should raise a red flag. For example, having six hairdressers or five medical technicians in one group – when you aren’t specifically recruiting respondents from the same industry – typically indicates the recruiter was pulling from a list.

Professional respondents. Professional respondents can be especially tricky in qualitative research since they know how to fly under the radar, disguise their frequency of participation and provide intentionally vague or brief answers. As researchers, we must employ as many tactics as possible to stay a step ahead. Several solutions help identify and/or discourage these participants: 

  • Work closely with recruiters to highlight your concerns.
  • Limit past participation and ensure the recruiters you work with scrub their lists.
  • Compare profiles and personal information.
  • Consider “virgin” respondents who’ve never participated in research before.
  • Be open to tier-two facilities – they can provide an attractive solution for in-person qualitative research, as fewer respondents may know how to “play the game.”
  • Request respondents bring in their qualifying product to show they are true users.
  • Only pay on-time respondents and encourage respondents to arrive early by offering an early-bird drawing for anyone who arrives at least 15 minutes before the group is scheduled to start. This time with the respondents before the group starts can be used to rescreen and confirm consistent responses. 

Posers. These respondents tend to be yes-people, claiming to have purchased or used every product, participated in every activity, etc. During the screening process, opt for open-ended brand usage questions whenever possible as a deterrent. When that isn’t possible (in fragmented categories, for example), have them describe the package, product, etc. Build in traps like fake brands to highlight the posers and, when possible, request photos of their pantry, liquor cabinet, car, etc., instead of asking what brands they’ve bought or used. This can be invaluable in eliminating posers from your group.

With online studies, require respondents to upload videos to verify they are who they say they are. It is a good idea to incorporate this “get to know me” video activity as Day 1 of an ethnographic online effort – this affords you plenty of time for replacements if they’re not who you expected them to be. If the study is product- or brand-specific, ask them to include the product in their introduction video to ensure they are true users.

Passive respondents. In person, these respondents typically sit quietly in the group – they give short answers and often agree with another respondent instead of giving their own answer. If time permits, it is helpful to talk with each respondent before the group; this can identify passive respondents early so they can be excused. If they make it into the group, call on them and encourage them to share their own opinions.

Online, passive respondents usually provide very brief answers and often don’t upload images or videos. There are several tactics we can implement to improve the quality of participants and their responses:

  • Include at least one open-ended question to determine if they are willing to give a full sentence or not.
  • Limit the number of respondents each moderator has so they can interact with each respondent daily, demonstrating that someone is reading their responses.
  • Communicate in their preferred manner, whether that be text or e-mail, to encourage better participation.
  • Only pay for each completed activity (vs. total participation) to encourage respondents to complete every activity.

Keep track of poor respondents

As a last note on these quantitative and qualitative issues, I think it’s imperative that we, as an industry, keep track of poor respondents. As the researcher, let the sample company know who gave a bad response. And as the sample company, keep track of poor responses so that you can remove people from your panel after repeat offenses. This will slowly help weed out the cheaters and keep them from impacting future studies. This is an industry call to action and we must all hold each other accountable.
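As a minimal sketch of what that shared bookkeeping could look like – the three-strike threshold and the field names are illustrative assumptions, not an industry standard:

```python
# Shared log of bad responses reported back to the sample company.
from collections import defaultdict

STRIKE_LIMIT = 3  # illustrative repeat-offense threshold

class FlagLog:
    def __init__(self):
        self.strikes = defaultdict(list)

    def report(self, panelist_id, study_id, reason):
        """Researcher reports a bad response from a specific study."""
        self.strikes[panelist_id].append((study_id, reason))

    def removable(self):
        """Panelists with repeat offenses, candidates for removal from the panel."""
        return [pid for pid, s in self.strikes.items() if len(s) >= STRIKE_LIMIT]

log = FlagLog()
log.report("P-104", "S-01", "straightlining")
log.report("P-104", "S-02", "selected fictional brand")
log.report("P-104", "S-03", "duplicate IP address")
print(log.removable())  # ['P-104']
```

However it is implemented, the mechanism matters less than the feedback loop itself: researchers report, sample companies act, and repeat offenders lose access to future studies.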