How to spot a fake

Editor’s note: Shawna Fisher is director, marketing and panel development at Common Knowledge Research Services, Dallas.

The careless and sometimes dishonest behavior of online panelists has recently come into the industry spotlight. This is reflective of the maturity of online research as a methodology. Minimizing the effects of this behavior is a multi-stage process involving questionnaire design, data collection and panel management. It not only requires putting up roadblocks to deter those who would “game the system” but also a commitment to setting reasonable expectations for just how much information can be extracted from a respondent in one survey (before boredom or fatigue sets in).

Satisficing is the term commonly used to describe the actions of several groups of less-than-dedicated respondents. To satisfice is to satisfy the minimum requirement in a given situation - to do no more than absolutely necessary. Researchers have long been aware of satisficing behavior among research respondents and its detrimental effect on the quality of research data. However, before discussing how to counteract satisficing, it is important to first understand the reasons behind respondents’ satisficing behavior.

Satisficers come in at least four different varieties:

  • The hurried respondent: She is late for a meeting, she is hungry, or her child is shouting at her from the kitchen - in other words, her attention is not on the survey.
  • The inexperienced respondent: He does not have the required level of knowledge or experience with the survey subject to provide thoughtful answers. The questionnaire assumes that he has more knowledge and experience with the topic than he does.
  • The irked respondent: She is aggravated or fatigued by a long survey, a seemingly endless grid or irrelevant or uninteresting questions.
  • The imposter: He is taking the survey for the sole purpose of receiving the incentive. His experience (or lack of it) with the subject matter is irrelevant to him; he just wants the reward. “He” may not even be a person but a program that completes surveys for the truly dedicated imposters.

The first three types are generally sincere. Their good intentions just got derailed by poor survey design or external distraction. These groups are known as weak satisficers. All respondents - even the “good ones” - may engage in weak satisficing at some point.

The imposter segment (also known as gamers) is the most troubling of the four types. These respondents are known as strong satisficers. Their motivation is driven by the desire to get the incentive - regardless of survey design, length or topic.

In an online research panel, satisficing behavior among online panelists takes several forms, such as: filling open-ends with nonsense text; straight-lining on grid questions; completing the survey in substantially less time than the average completion time; providing illogical or inconsistent responses; or selecting all available answer choices on multi-select questions.

No single type of satisficer has a monopoly on any of these behaviors. They all do these things, though their motivations for doing so differ.

The following steps either minimize satisficing behavior or make it easy to spot the satisficers so they can be removed from the sample before a study is completed. These best practices in survey design, data cleaning and panel management are becoming standard requirements among the best online panel providers.

Survey design

Following are some intelligent survey design practices along with a few tricks that help to identify satisficing behavior.

The first trick is to include low-incidence products and services in answer choices for ownership/usage screening questions. Respondents who select too many low-incidence items should be viewed with suspicion. This is one way to catch imposters who are trying hard to qualify for studies. This method will also catch some of the other types of satisficers who are too fatigued or distracted to realize what they are selecting.

A similar technique used to ferret out the imposter is to include bogus products/brands in questions screening for ownership or usage. Be warned, however, that if bogus item names bear too much resemblance to real products, there could be cases of honest folks selecting those bogus choices, mistakenly thinking that they are the real thing. On the other side of that coin, the more sophisticated respondents may notice the fake choices and question the researcher’s credibility or intent. This technique should be used sparingly.

Note that when low-incidence product/service or bogus product/brand questions are used as screeners, it is recommended not to terminate respondents who provide suspect answers right away since doing so, over time, will clue them in as to the right way to answer. It is best to include several other questions in the screener to help disguise the threshold for qualification.
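As a minimal sketch of how these screener checks might be automated - the item names, the bogus brands and the threshold below are illustrative assumptions, not recommended values - suspect answers can be flagged for review rather than terminated on the spot:

```python
# Hypothetical sketch: flag screener answers that claim too many low-incidence
# items or any bogus brand. Item and brand names and the threshold are
# illustrative only; flagged respondents are held for review, not terminated.
LOW_INCIDENCE_ITEMS = {"satellite phone", "home wind turbine", "private aircraft"}
BOGUS_BRANDS = {"Nortelle Home Audio", "Axxiom Motor Oil"}  # invented brand names
MAX_PLAUSIBLE_RARE_ITEMS = 1

def screener_flags(selected_items):
    """Return reasons this screener looks suspect (empty list = no flags)."""
    selected = set(selected_items)
    flags = []
    if len(selected & LOW_INCIDENCE_ITEMS) > MAX_PLAUSIBLE_RARE_ITEMS:
        flags.append("claims too many low-incidence items")
    if selected & BOGUS_BRANDS:
        flags.append("claims to use a product that does not exist")
    return flags

# Example: selecting two rare items and one fake brand produces two flags.
print(screener_flags({"satellite phone", "private aircraft", "Axxiom Motor Oil"}))
```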

Another technique used to filter the imposters and the other types of satisficers is the placement of various types of trap questions in ratings grids (see Figure 1). Verification ratings - such as “Please verify your place in the survey by selecting the third answer choice from the left” - help to flag respondents who are not reading the question, regardless of what type of satisficer they are. A similar technique to catch the straight-liners is to include both positive and negative statements of an idea or attribute in the same grid.

Repeating a question later in the questionnaire using a different presentation or opposite scale can also be used to flag inconsistent responses that could suggest satisficing.
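A minimal sketch of how these two trap checks might look, assuming a 1-to-5 agreement scale; the expected verification answer and the tolerance are assumptions for illustration:

```python
# Hypothetical sketch of two grid traps on a 1-to-5 agreement scale.
SCALE_MAX = 5
VERIFICATION_EXPECTED = 3  # "select the third answer choice from the left"

def failed_verification(verification_answer):
    """True if the respondent missed the explicit verification instruction."""
    return verification_answer != VERIFICATION_EXPECTED

def inconsistent_reverse(positive_rating, reversed_rating, tolerance=1):
    """Flag answers to a statement and its opposite (or to a question repeated
    later on a reversed scale) that do not roughly mirror each other."""
    expected = SCALE_MAX + 1 - positive_rating  # what the reversed item should be
    return abs(reversed_rating - expected) > tolerance

# Example: agreeing strongly with both "I always read labels" (5) and
# "I never read labels" (5) is flagged as inconsistent.
print(inconsistent_reverse(5, 5))  # True
```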

Other survey design considerations that help reduce satisficing behavior, particularly among the unwitting types who aren’t out to cheat the system, are appropriate survey length and topic relevance. It is generally agreed in the industry that longer surveys often yield less reliable data. This is one of the fundamentals of questionnaire design, but it’s an easy one to overlook when dealing with evolving objectives and the plea for “just one more question” from budget- and time-restricted clients.

Topic relevance is another fundamental of good questionnaire design that should be carefully considered. Fortunately, with the growing sophistication in the arena of panel profiling and targeting, this is getting easier to do.

Many panel managers and online sample providers have little or no influence on questionnaire design, which is often handled by their clients. Yet the onus is on sample providers and survey programming services to educate their clients on the pitfalls of ignoring the basics of solid questionnaire design. Ultimately, neglecting these practices leads to lower-quality data. At some point, quality can fall so low as to put business decisions at risk.

Data cleaning

The beauty of online surveys is that while a survey is in progress, responses can be monitored as they come in and suspect responses can be quarantined. Upon examination, those responses deemed unreliable can be tossed out, allowing sample quotas to be adjusted.

The first thing to look for when examining the data is straight-lining on grid questions (see Figure 2). An instance or two may be valid, but often, straight-lining is a red flag that indicates a respondent is satisficing.

Selecting all answer choices on ownership and/or usage questions is another sign that a respondent is really only interested in qualifying for the incentive. Again, an occasional instance of this pattern may be valid, but often it is a warning.

Another sign of satisficing behavior is responding to open-ended questions with nonsense or illogical information.
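A minimal sketch of how these three data-cleaning flags might be applied to a completed interview while a study is in the field; the field names, option list and gibberish heuristic are assumptions, not a prescribed implementation:

```python
import re

# Hypothetical data-cleaning flags for a single completed interview.
def straight_lined(grid_ratings):
    """True if every rating in a grid is identical."""
    return len(set(grid_ratings)) == 1

def selected_everything(selected, available_options):
    """True if the respondent ticked every choice on a multi-select question."""
    return set(selected) == set(available_options)

def looks_like_nonsense(open_end_text, min_length=5):
    """Crude heuristic for gibberish open-ends: very short, no vowels, or one
    or two characters repeated. It only catches the most obvious junk."""
    text = open_end_text.strip().lower()
    if len(text) < min_length:
        return True
    if not re.search(r"[aeiou]", text):
        return True
    return len(set(text.replace(" ", ""))) <= 2

def review_flags(interview):
    """Collect flags for one interview so suspect completes can be quarantined."""
    flags = []
    if straight_lined(interview["grid_ratings"]):
        flags.append("straight-lining")
    if selected_everything(interview["ownership"], interview["ownership_options"]):
        flags.append("selected all ownership items")
    if looks_like_nonsense(interview["open_end"]):
        flags.append("nonsense open-end")
    return flags
```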

Timers can also help detect satisficing behavior, as one of the common symptoms of satisficing is completing the survey much faster than what would be required to actually read and comprehend the questions. The challenge in setting expectations for timing is that skip patterns can wreak havoc here - some respondents will finish much faster for legitimate reasons. Setting appropriate timing expectations on a per-page basis helps to avoid labeling legitimate responses as garbage. Another caveat to timers is that the more surveys a panelist has taken, the more likely he or she is to finish slightly faster than the average completion time. This may be a result of increasing proficiency in navigating survey questions over time.
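A minimal sketch of a per-page speeder check under those caveats; the page names and per-page minimum times are illustrative and would in practice be set from observed completion times:

```python
# Hypothetical per-page speeder check. Minimum times are in seconds and are
# illustrative only; checking page by page (rather than total survey time)
# avoids penalizing respondents who legitimately skipped whole sections.
PAGE_MINIMUM_SECONDS = {"screener": 10, "grid_1": 25, "open_ends": 30}

def speeding_pages(page_times, minimums=PAGE_MINIMUM_SECONDS):
    """Return the pages a respondent finished faster than the per-page minimum."""
    return [page for page, seconds in page_times.items()
            if page in minimums and seconds < minimums[page]]

# Example: this respondent blew through the grid but took normal time elsewhere.
print(speeding_pages({"screener": 14, "grid_1": 6, "open_ends": 45}))  # ['grid_1']
```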

Panel management

Minimizing satisficing behavior isn’t limited to the realm of the questionnaire or data. Panel management techniques can support the efforts made in questionnaire design and data cleaning.

First of all, an ID verification process should take place at the recruitment stage. Upon registration, panelists’ e-mail addresses, mailing addresses and demographics can be checked against current panel membership for duplication. Mailing addresses can also be verified with the USPS database to ensure that they are valid. And finally, new panelists’ registration data can be compared with internal blacklists of known imposters.
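A minimal sketch of the duplicate-registration portion of such a check; the field names and blacklist are assumptions, and a production system would also validate mailing addresses against the USPS database:

```python
# Hypothetical registration-time duplicate check.
def normalize(value):
    """Lower-case and collapse whitespace so trivially different entries still match."""
    return " ".join(value.lower().split())

def registration_flags(new_panelist, existing_panelists, blacklist_emails):
    """Return reasons to hold a new registration for manual review."""
    flags = []
    email = normalize(new_panelist["email"])
    address = normalize(new_panelist["mailing_address"])
    if email in blacklist_emails:
        flags.append("e-mail on known-imposter blacklist")
    for member in existing_panelists:
        if email == normalize(member["email"]):
            flags.append("duplicate e-mail address")
        if address == normalize(member["mailing_address"]):
            flags.append("duplicate mailing address")
    return flags
```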

Managing panelists’ participation in surveys is crucial in helping to limit satisficing behavior by limiting the opportunities respondents have to engage in the behavior. Limiting the number of invitations to surveys helps to weed out the panelists who are looking to maximize incentive gains and may help keep the weak satisficers from getting fatigued. Going one step further, limiting the number of surveys that panelists can complete in a given time frame also supports the effort. Panelists who respond to every survey invitation should be treated with skepticism and removed from the panel.
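A minimal sketch of a participation cap applied before sending an invitation; the 30-day window and the limit of four completes are illustrative thresholds, not industry standards:

```python
from datetime import datetime, timedelta

# Hypothetical participation cap checked before a panelist is invited.
MAX_COMPLETES_PER_WINDOW = 4
WINDOW = timedelta(days=30)

def eligible_for_invitation(completion_dates, now=None, limit=MAX_COMPLETES_PER_WINDOW):
    """True if the panelist has not already hit the completes cap in the window."""
    now = now or datetime.now()
    recent = [d for d in completion_dates if now - d <= WINDOW]
    return len(recent) < limit

# Example: a panelist with five completes in the past month is skipped.
recent = [datetime.now() - timedelta(days=d) for d in (1, 3, 7, 12, 20)]
print(eligible_for_invitation(recent))  # False
```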

It could be argued that cash incentives are a motivator of strong satisficing behavior. Alas, cash incentives are strong motivators to participate in research, period. Given that response rates are falling on the whole, many panel managers and sample providers are loath to step back from the “cash-is-king” philosophy of incentives. Cash as a research incentive is here to stay.

The way cash is paid can make it difficult for imposters to commit incentive fraud. Paying cash in the form of a check mailed to the respondent’s verified home address puts up a roadblock to those who would rent multiple mail boxes (or set up multiple bank and PayPal accounts) for the purpose of maintaining multiple panel memberships.

Setting the threshold

Be warned that the actual application of these techniques is trickier than it appears to be. One of the risks associated with these techniques is that honest respondents (weak satisficers) will get disqualified from studies, or worse, thrown out of panels, when their behavior isn’t completely their fault. The most challenging aspect of a strategy to minimize satisficing behavior is setting the threshold that separates the imposters from the merely inexperienced, irked and hurried respondents. Panel managers and sample providers have to decide how many instances of satisficing warrant elimination from a study or from the panel, considering survey length and complexity.

Another caveat is that the addition of various question traps makes surveys longer, and, over time, the imposters will get wise to the traps set by researchers. What is effective this year may need to be completely redesigned for next year.

Perhaps the biggest caveat is that clients must not assume that all online sample providers are taking these steps. Clients should ask their providers what steps are taken to minimize satisficing before the start of a project, preferably when choosing a supplier.

Accept responsibility

Clients receive the benefits of higher data quality and more reliable results as a result of efforts made to identify and minimize satisficing. However, clients must be willing to accept part of the responsibility for these efforts in order to reap the greatest rewards. Clients need to develop an appreciation for the respondent experience and how questionnaire design influences satisficing behavior.

Now that online research is becoming the most prevalent form of quantitative data collection, researchers and clients have come to expect more from the medium in the form of longer and more complex surveys. This wasn’t so in the early days of online research, when a five- or 10-minute survey was the norm. Now, many online surveys top 40 minutes. Just because a 40-minute survey can be done over the Web doesn’t mean it should be done regularly - both satisficing behavior and the number of incomplete interviews increase considerably at the half-hour mark.

Work together

Sample providers have opted to fill the growing demand for online research. In many cases, they have done so without protesting the increasing survey length or complexity. Sample providers (and survey programming services) must take responsibility for educating clients on the consequences of long, difficult surveys. It’s a tough road ahead, but success will be achieved when clients and providers work together to minimize the effects of satisficing behavior.