Faster than a speeding survey

Editor’s note: Terri Maciolek is principal and founding partner of Data Quest Analytics LLC, a Wynnewood, Pa., research firm. Jeffrey Palish is regional vice president of Epocrates Inc., a San Mateo, Calif., software firm. This is the first part of a two-part series. The second article appears in the July 2009 issue.

Picture yourself strapping on your helmet, pulling on your racing gloves and sitting behind the wheel of your bright red Ferrari. You turn the key and feel the exhilarating thrust of the engine as the green flag is lowered and you put your foot, full force, on the gas pedal. You smile. As long as you reach the finish line, you can drive as fast as you want and take as many shortcuts as you can find, right?

Such is sometimes the situation with online surveys. As we all know, critical decisions are often based upon online survey results and, in the pharmaceutical industry, we often place a somewhat blind faith in the idea that physicians dutifully and conscientiously respond to all of our many questions.

But, truth be told, physicians are humans too and, like the race car driver, they sometimes go too fast or cut corners in completing our online surveys. What can we do, as makers of the Ferrari, to minimize the likelihood of their actions jeopardizing the quality of our data? In this first of two articles, we examine the perspective of the “car” (i.e., survey vehicle or questionnaire) and what is being done to catch the “driver” (i.e., physician or respondent) who might be speeding or taking shortcuts.

There are essentially three dimensions for researchers (and clients) to consider: 1) making sure the driver has a valid license to drive; 2) catching drivers who go over the speed limit and/or fail to read the road signs; and 3) giving the driver a well-tuned car to drive. While physicians who respond to our surveys are generally safe and experienced drivers who provide the industry with reliable and valid survey data, we need to worry about the few who do not, as preserving the integrity of our data is paramount.

Building block

Clearly, verifying a physician’s “license to drive” is essential and, without question, the fundamental building block for physician-generated data. For obvious reasons, online physician panels have brought new challenges in this regard.

There are numerous physician validation processes.1 (See related article “Seeking the correct diagnosis” on p. 22 of this issue.) As an example, the Epocrates physician panel uses a process in which physician opt-in requests are checked against the American Medical Association (AMA) Masterfile (which is updated quarterly) to assure physician authenticity. Individuals attempting to register are taken through a series of questions (e.g., name, date of birth, medical school attended, year graduated) designed to verify the person in real time. This information is checked against the Masterfile. If any of these data do not match perfectly, the registrant is informed that he or she cannot be verified. If the physician passes the verification process, his or her AMA Medical Education Number is appended to his or her record and he or she is then eligible to be invited to participate in upcoming surveys.
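The strict all-fields-must-match rule described above can be sketched in a few lines. This is a hypothetical illustration only: the field names and the shape of the Masterfile record are assumptions, and the actual Epocrates/AMA matching process is proprietary.

```python
# Illustrative sketch of strict field-by-field verification against a
# Masterfile record. Field names are hypothetical, not the actual schema.

REQUIRED_FIELDS = ("name", "date_of_birth", "medical_school", "graduation_year")

def verify_registrant(registrant: dict, masterfile_record: dict) -> bool:
    """Return True only if every required field matches the record exactly."""
    return all(
        registrant.get(field) == masterfile_record.get(field)
        for field in REQUIRED_FIELDS
    )

applicant = {"name": "A. Physician", "date_of_birth": "1970-01-01",
             "medical_school": "Example Med", "graduation_year": 1996}
on_file = dict(applicant)          # a perfectly matching Masterfile record
assert verify_registrant(applicant, on_file)

on_file["graduation_year"] = 1995  # any single mismatch fails verification
assert not verify_registrant(applicant, on_file)
```

The key design point, as the article notes, is that verification is all-or-nothing: a single mismatched field rejects the registrant rather than allowing a partial match.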

Array of traps

The use of an array of traps to catch our crafty drivers should be standard at this point when conducting online surveys.

•   Knowledge traps can be used to determine if a respondent is actually the type of professional that he or she claims to be. For example, if a respondent is taking a survey as a dentist, you might place a question in the upfront qualifications section that queries the number of surfaces on a particular tooth.

•   Speed traps benchmark a respondent’s progression through the survey at various points against an average expectation (typically determined via a pretest). Speed traps serve several functions: 1) to pace or slow down an otherwise conscientious respondent; 2) to reduce the temptation of a potential speeder; and 3) to eliminate the repeat offender (and thus, invalid data) from the survey.

A speed trap is best initially placed about one-fourth to one-third of the way into the survey (depending upon survey length). It flags speedy responders, makes them aware they are completing the survey at an abnormally fast pace, and asks them to please slow down and give thoughtful responses (it also gives the surveyor an opportunity to remind the respondent of the time commitment to which he or she has agreed). Another bump is similarly placed about halfway through completion (again, based upon an average time-to-completion benchmark), at which point the same “slow down” reminder is used, with a clear explanation that continued speeding will result in termination (and no compensation). Clearly, then, if speeding continues, the respondent is terminated without remuneration and informed as to why.

•   Logic traps are intended to improve the quality of response via a cross-check process. The fundamental premise is that if a question is asked multiple ways, the answer should be the same and, if not, perhaps the respondent is not keeping his or her eye on the road. Inconsistent answers provide another opportunity to warn the respondent to give thoughtful responses or risk termination.

•   Attention traps may indeed be the simplest to implement although, particularly with physicians, they need to be executed tactfully. For example, during a rating task you might insert an unrelated attribute or statement to be rated to flag straightliners or inattentive respondents. Here again, a gentle reminder to slow down upon first offense is recommended (but termination seems clearly warranted upon repeated offense with explicit warnings, as suggested above).
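The escalating warn-then-terminate policy behind the speed traps above can be expressed as a simple checkpoint rule. The benchmark times and the “half the expected pace” threshold below are illustrative assumptions; a real study would calibrate both from a pretest, as the article recommends.

```python
# Illustrative sketch of an escalating speed-trap check at a survey
# checkpoint. The 50% threshold is an assumption, not an industry standard.

def check_speed(elapsed_seconds: float, benchmark_seconds: float,
                prior_warnings: int, threshold: float = 0.5) -> str:
    """Return 'ok', 'warn' (first offense), or 'terminate' (repeat offense).

    elapsed_seconds   -- respondent's time to reach this checkpoint
    benchmark_seconds -- average time from the pretest for this checkpoint
    prior_warnings    -- number of real-time warnings already issued
    """
    if elapsed_seconds >= benchmark_seconds * threshold:
        return "ok"
    return "warn" if prior_warnings == 0 else "terminate"

# Far too fast at the one-quarter checkpoint: first real-time warning.
assert check_speed(60, 300, prior_warnings=0) == "warn"
# Still speeding at the halfway checkpoint after a warning: terminated.
assert check_speed(150, 600, prior_warnings=1) == "terminate"
# Normal pace passes quietly.
assert check_speed(280, 300, prior_warnings=0) == "ok"
```

A similar pattern (count the offense, warn once, terminate on repeat) applies equally to the logic and attention traps, with the trigger condition swapped for an answer mismatch or a failed catch item.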

We suggest that it is incumbent upon the research agency to inform the panel provider of crafty drivers who are caught in these traps and terminated after repeated, explicit real-time warnings. It is then the responsibility of the online panel provider to start a process by which the respondent is eliminated from that provider’s online survey invitations, should there be multiple transgressions.

Furthermore, real-time warnings seem essential in both the short and long term to (hopefully) have respondents slow down and be more engaged while completing a survey, as well as create a positive impact on future participation. Real-time warnings and traps are fair and provide opportunities to reiterate the rules of the road. That way, respondents cannot complain when they are pulled over and given a ticket for violating the rules (i.e., speeding and/or cheating).

As a final safety net, it seems prudent to also check the data on the back end after all surveys are completed using a good data cleaning process. Each respondent data record should be manually checked for straightlining, replicated answer patterns, illogical answers, etc., as inevitably, one or a few aberrant respondents will beat the real-time traps and warnings yet still provide invalid or unreliable responses. As such, it is often a good idea to anticipate (both tactically and financially) the need to oversample by a few respondents to allow for the discarding of suspected bad drivers while still allowing for full quota fulfillment.
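Two of the back-end checks described above, straightlining and replicated answer patterns, are mechanical enough to sketch. The flagging rules below are illustrative assumptions (identical ratings across a whole grid; an answer vector that duplicates another respondent’s); in practice, flagged records would still be reviewed manually, as the article advises.

```python
# Illustrative back-end cleaning sketch: flag suspect respondent records
# for manual review. Rules are hypothetical simplifications.

def flag_straightliners(responses: dict) -> set:
    """Flag respondents who gave the same rating to every grid item."""
    return {rid for rid, answers in responses.items()
            if len(set(answers)) == 1}

def flag_duplicates(responses: dict) -> set:
    """Flag respondents whose full answer pattern replicates another's."""
    seen, flagged = {}, set()
    for rid, answers in responses.items():
        key = tuple(answers)
        if key in seen:
            flagged.update({rid, seen[key]})
        else:
            seen[key] = rid
    return flagged

data = {
    "r1": [3, 4, 2, 5, 3],
    "r2": [4, 4, 4, 4, 4],   # straightliner
    "r3": [3, 4, 2, 5, 3],   # replicates r1's pattern
}
assert flag_straightliners(data) == {"r2"}
assert flag_duplicates(data) == {"r1", "r3"}
```

Checks like these are cheap to run over every completed record, which is precisely why oversampling by a few respondents is worth budgeting for: discarding a flagged record should not jeopardize quota fulfillment.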

Broken-down jalopy

While we as researchers and panel providers must ensure that the “bad guys” are caught, we must also remind ourselves that nobody wants to drive a broken-down jalopy. No physician providing his or her time and effort to complete an online questionnaire wants to suffer through an uninspiring or confusing survey.

Good survey design and question-writing cannot and should not be circumvented or ignored in online surveys. The basics still apply, such as questions that are clear, non-repetitive, grammatically correct and concise. The length of the survey itself (time to completion, number of questions) must also be acceptable, and it should be stated clearly and specifically in the invitation to participate. In addition, with the capabilities of the online medium, it is easier to deliver surveys that are visually pleasing, enjoyable and, especially for physicians, intellectually stimulating.

In part two of our article next month, we’ll share with you physicians’ perspectives on what tempts them to speed or take shortcuts and what we can do, as an industry, to make online surveys a positive experience.

1 Frost & Sullivan white paper, “Unmasking the Respondent: How to Ensure Genuine Physician Participation in an Online Panel,” December 2008.