Editor's note: Keith Phillips is senior methodologist with SSI, Shelton, Conn.
How do we, as researchers, judge data quality? We usually look at the behavior of participants within the questionnaire itself: how long they took to complete the survey, what they wrote in open-ended questions, whether they selected the same answer option throughout a grid question, whether their answers were consistent and how they responded to quality-control questions. These quality-control questions are designed to measure attentiveness and remove participants who are not paying attention.
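To make those behavioral checks concrete, here is a minimal sketch of how two of them, speeding and straight-lining, might be flagged from respondent-level data. The field names, data layout and cutoffs below are illustrative assumptions for the sake of the sketch, not rules from any particular system.

```python
# Minimal sketch of the behavioral checks described above.
# All field names and cutoffs are illustrative assumptions.

def is_speeder(duration_seconds, median_seconds, fraction=0.4):
    # Assumed rule: completing in under 40% of the survey's median time
    return duration_seconds < fraction * median_seconds

def is_straightliner(grid_answers):
    # Same punch selected for every row of a grid question
    return len(set(grid_answers)) == 1

# Hypothetical respondent record
respondent = {"duration_seconds": 150, "grid_q10": [4, 4, 4, 4, 4, 4]}
median_seconds = 540  # assumed median completion time for this survey

print("speeder:", is_speeder(respondent["duration_seconds"], median_seconds))
print("straightliner:", is_straightliner(respondent["grid_q10"]))
```

In practice the cutoffs vary by study, and a respondent is usually removed only when several such flags accumulate rather than on any single one.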
There are different types of quality-control questions. Some are simply inserted within a grid and ask participants to select a specific punch; others measure quality by allowing participants to contradict themselves. Still others intentionally misdirect participants, so that the real question is buried in the detail of a long instruction and is not at all what it appears to be.
Quality-control questions rest on the assumption that participant misbehavior in a particular moment is indicative of misbehavior throughout the entire survey, and that data quality therefore improves when such participants are removed. An alternate assumption is that a degree of inattentiveness is normal over the course of a survey: participants may be attentive during the trap question but not during key measures. Conversely, those failing the trap may not have been paying attention in that moment but are contributing elsewhere, meaning many are no different from the participants being kept.
Working at a sample provider, I see questionnaires from a variety of industries conducting an array of research. One thing I have noticed is the variety of quality-control measures used to validate online self-completion surveys; in particular, the varying number of participants excluded due to poor data quality, which was sp...