Editor's note: Kevin Waters is a research technologist at the National Food Processors Association (NFPA). Through the National Food Laboratory, a contract research subsidiary of NFPA, he specializes in sensory, consumer, and marketing research studies of consumer packaged goods.

With the number of research studies occurring every day, the chance of being screened for participation in a study is increasing. As the chance of participating in a research study increases, so does the likelihood that individuals will participate in multiple studies over a given period. This is particularly true of organizations that screen and recruit from lists or databases of respondents.

Re-use of respondents in research studies results in "test-wise" individuals. One could hypothesize that "test-wise" individuals, as opposed to "naive" individuals, generally can anticipate what might be asked of them in research studies, particularly during the screening process. Excessive knowledge of screening questions might encourage panelists to falsely answer questions for the sake of being included in the study.

With this in mind, screening questionnaires should be designed to maximize the likelihood of obtaining honest answers from respondents to various selection criteria questions. Examples presented in this article include the addition of a "dummy" termination question, the use of dummy variables or categories, and/or the use of an open-ended question instead of a closed-ended one. Utilizing these techniques, and more importantly, periodically changing them, may tend to keep all prospective study respondents guessing and create less routine screeners. Since the screening process is an integral part of any research study, the additional time to screen via the format discussed in this article should be considered insignificant compared to the benefits and reassurance it could provide.

Dummy termination questions

One example designed to minimize the routine nature of screeners, and also reduce dishonest answers from study to study, is to incorporate a dummy termination question which is read to respondents immediately after they terminate on any one of the study criteria questions. In doing so, the study criteria are in a sense concealed, perhaps decreasing the chance of false answers from respondents screened for future studies. Thus, if a respondent indicates that they have participated in a research study within the past six weeks and the study criteria require no participation within the past three months, the dummy termination question would be asked prior to actual termination of the respondent. Regardless of how the respondent answers the dummy termination question, the interviewer is instructed to conclude the conversation at that point (e.g., "Thank you for your time" or "We already have the quota filled for that category").

The dummy termination question should be straightforward; as an example, "How many children do you currently have living in your home?" Changing these questions from screener to screener is recommended to avoid patterns which may prompt false answers from respondents. Failure to change such questions would lead to the routineness issue already discussed.
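As a minimal sketch of this flow (the question pool and closing line are illustrative assumptions, not a prescribed script), the interviewer's path after a respondent fails any real criterion might look like this:

```python
import random

# Hypothetical pool of dummy termination questions; rotating among
# them from screener to screener avoids a recognizable pattern.
DUMMY_QUESTIONS = [
    "How many children do you currently have living in your home?",
    "How many hours of television do you watch in a typical week?",
    "Do you rent or own your current residence?",
]

def terminate_with_dummy(rng=random):
    """Script followed when a respondent fails a real criterion:
    ask one dummy question, then close out regardless of the answer,
    so the true termination point is concealed."""
    dummy = rng.choice(DUMMY_QUESTIONS)
    return [
        dummy,                       # asked, but the answer is disregarded
        "Thank you for your time.",  # conversation ends here either way
    ]
```

Because the same polite close follows every path, the respondent cannot tell which question actually disqualified them.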

Age

Appearing in just about all screeners, questions pertaining to age often create reluctance, and possibly a greater likelihood of dishonest answers, among respondents. Standard phrasing of age questions on screeners tends to clue respondents into the categories that are considered terminates. A typical example might be: "Into which of the following age groups do you fall? 25 and under, 26-35, 36-45, 46-55, 56 and over." The nature of this question suggests that the broad "25 and under" or "56 and over" categories probably are not meaningful to the study. Realizing this, a respondent might decide to stretch the truth and state a category with a narrow, more defined range. These, of course, are respondents who should not be a part of the study. One means of minimizing this is to include dummy age ranges in the response categories. For example, using the information from above, the response categories could be increased from 5 to 7: "17 and under, 18-25, 26-35, 36-45, 46-55, 56-65, 66 and over." Making it more difficult for respondents to outguess the system will make it less likely that inappropriate respondents will find a way into your study.
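The expanded category scheme above can be sketched as follows. The assumption that 26-55 is the qualifying range is purely for illustration; the point is that the added dummy ranges keep the terminating categories from standing out:

```python
# Expanded age categories from the example above, including the
# dummy ranges at both ends of the scale.
AGE_CATEGORIES = [
    "17 and under", "18-25", "26-35", "36-45",
    "46-55", "56-65", "66 and over",
]

# Assumed qualifying range for illustration only.
QUALIFYING = {"26-35", "36-45", "46-55"}

def category_for(age):
    """Map a stated age to its screener category."""
    if age <= 17: return "17 and under"
    if age <= 25: return "18-25"
    if age <= 35: return "26-35"
    if age <= 45: return "36-45"
    if age <= 55: return "46-55"
    if age <= 65: return "56-65"
    return "66 and over"

def qualifies(age):
    return category_for(age) in QUALIFYING
```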

Competitive employment

As researchers, we want to be careful to exclude respondents who are employed in competitive fields. One way of dealing with competitive employment is to read a list of occupations of concern along with one or two dummy occupations. This uses the logic discussed above for age; that is, panelists screened previously might be cognizant of the fact that certain occupations lead to termination. It is these panelists who may falsely answer "no" to the routine list of occupations which typically appear on screeners. Examples of occupations that are routinely mentioned in screeners are advertising agencies, marketing research firms, manufacturers of the product/service of interest, public relations firms, newspapers, and TV or radio stations. Respondents who indicate "yes" to these occupations are terminated from the study.

A way of minimizing false answers in this area is to not only include, but also periodically change, the dummy occupations. For example, the above list can be lengthened by one or two occupations, preferably using a rotation scheme to change the starting point. The dummy occupations should be unrelated to the list of actual occupations to keep the respondents on their toes. Changing them periodically, such as every six months, will minimize the routineness that the question prompts. When using this format, a lead-in that works nicely is: "We are interested in talking with people who work in various industries. Do you or does anyone in your household work for the following?"
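A hedged sketch of this rotation scheme might look like the following (the specific occupations and dummy fillers are assumptions based on the examples above):

```python
# Occupations that actually terminate, per the examples above.
REAL_OCCUPATIONS = [
    "an advertising agency",
    "a marketing research firm",
    "a food manufacturer",          # product/service of interest (assumed)
    "a public relations firm",
    "a newspaper",
    "a TV or radio station",
]

# Unrelated dummy occupations, to be swapped out periodically.
DUMMY_OCCUPATIONS = ["a hardware store", "a travel agency"]

def occupation_list(start):
    """Rotate the starting point so the list reads differently
    from interview to interview."""
    merged = REAL_OCCUPATIONS + DUMMY_OCCUPATIONS
    i = start % len(merged)
    return merged[i:] + merged[:i]

def should_terminate(yes_answers):
    """'Yes' to any real occupation terminates; dummies are ignored."""
    return any(occ in REAL_OCCUPATIONS for occ in yes_answers)
```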

Past participation/recency of use/recency of purchase

Most organizations or groups sponsoring research studies specify that respondents must not have participated in a study prior to the one in question within a certain time period: e.g., 1, 3, 6 months. It is interesting to note the large number of screeners that phrase questions in the format of, "Have you participated in a research study within the past X months?" This fairly clearly specifies to the respondent that X months is the critical parameter for this question. Respondents who realize this could be more prone to answering dishonestly. A better way to handle this question is to replace the yes/no question with an open-ended one: "When was the last time that you participated in a research study?" If the respondent's last participation falls outside the X-month period, they continue to the next question; if not, they are terminated.
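The continue/terminate logic for the open-ended version can be sketched as follows, assuming the interviewer codes the free-form answer into months (the three-month cutoff is just an example value of X):

```python
CUTOFF_MONTHS = 3  # the X-month criterion; 3 is an assumed example

def past_participation_decision(months_since_last):
    """Terminate respondents whose last study falls inside the
    cutoff window; the question itself never reveals the cutoff."""
    if months_since_last is None:      # "I've never participated"
        return "continue"
    if months_since_last < CUTOFF_MONTHS:
        return "terminate"
    return "continue"
```

The key point is that the cutoff lives only in the decision rule, not in the wording read to the respondent.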

The same method can also be applied to criteria pertaining to recency of using or purchasing goods, services, etc. For example, if the study of interest requires people who have used chunky peanut butter within the past six weeks, a question phrased in an open-ended format would probably prompt more honest answers. Thus, the question "How long has it been since you have consumed chunky peanut butter?" is preferred over "Have you consumed chunky peanut butter within the past six weeks?" The former allows an unconstrained range of answers with less chance of false information, whereas the latter may clue respondents into the critical time parameter and perhaps prompt a false "yes."

A second method, pertaining to the recency of use/recency of purchase questions, deals with concealing the good or service being screened for by incorporating one or two dummy goods/services. For example, "For each of the following products, please indicate how long it has been since you have consumed them: apple juice, peanut butter, tortilla chips." Response to the dummy products would be disregarded, and the answer to the product in question would be considered in the continue/terminate decision. Concealing the product, as above, reduces the chance of revealing the product being tested, which in turn could minimize dishonest responses, particularly when phrased in an open-ended manner.
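Combining the open-ended phrasing with dummy products, the screening step might be sketched like this (the product list and six-week cutoff follow the peanut butter example above; the coding of answers into weeks is an assumption):

```python
# Products read to the respondent; answers to the dummies are
# recorded but disregarded. Only the target drives the decision.
PRODUCTS = ["apple juice", "peanut butter", "tortilla chips"]
TARGET = "peanut butter"
CUTOFF_WEEKS = 6

def recency_decision(weeks_since_use):
    """weeks_since_use maps each product to weeks since last consumed.
    Qualify only on recent use of the concealed target product."""
    target_weeks = weeks_since_use.get(TARGET)
    if target_weeks is None:
        return "terminate"             # never used the target product
    if target_weeks <= CUTOFF_WEEKS:
        return "continue"
    return "terminate"
```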

Attitude toward product/service

It is also common to confirm that respondents have a positive attitude/interest in the good or service being researched.

By this point in the screener, respondents have answered and qualified for most of the criteria required for the study. They realize that if they answer another question or two, an invitation to participate in the research study is likely, along with the chance of receiving some form of incentive.

With this in mind, it may not be prudent to ask an attitude question only on the product or service to be tested, as it may invite an answer intended to get the respondent into the study instead of the respondent's honest attitude. To avoid this, dummy products should be incorporated into the list.

Thus, if one wanted to confirm that respondents had a positive attitude toward maple syrup on pancakes, rather than simply eliciting a response for this product, responses for two additional products could be obtained, such as ketchup on french fries and strawberry jam on wheat bread. As before, responses to the dummy products are disregarded. This would conceal the product of interest, assuming that questions in the interview prior to this one have not suggested otherwise. Also of importance is choosing appropriate dummy products to avoid a list with an outlier. For example, it may not be wise to include such items as "orange soda" or "grape soda" with "maple syrup on pancakes." Of the three, maple syrup on pancakes is clearly the odd sample, which may affect how a respondent answers the attitude question.
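As a final sketch, the attitude check with dummy products might be coded as below. The item list mirrors the maple syrup example above; the 1-5 liking scale and qualifying threshold are assumptions for illustration:

```python
# Items read to the respondent; the dummies are chosen to be similar
# in kind, so no single item stands out as the product of interest.
ATTITUDE_ITEMS = [
    "ketchup on french fries",        # dummy
    "maple syrup on pancakes",        # product of interest
    "strawberry jam on wheat bread",  # dummy
]
TARGET_ITEM = "maple syrup on pancakes"

def attitude_decision(ratings, positive_threshold=4):
    """ratings maps each item to a 1-5 liking score (scale assumed).
    Dummy ratings are disregarded; qualify only on a positive
    attitude toward the target item."""
    if ratings.get(TARGET_ITEM, 0) >= positive_threshold:
        return "continue"
    return "terminate"
```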

Conclusion

The screening process builds the foundation for reliable and valid data collection. While it is not appropriate to trick respondents during the screening process, it is important to elicit honest answers from prospective respondents. This can be accomplished with properly designed and well-thought-out screening questionnaires. Some of the information discussed above could be considered to achieve this, particularly if those interviewed are, or are on the verge of becoming, "test-wise." While the points discussed in this article may lengthen the screener, the additional time is generally minimal and worth the extra expense. This is particularly rewarding if it leads to honest answers, more valuable data, and fewer respondents who disqualify at the time of the actual study when rescreened based on questions asked during the initial screening process.