Can we do better screeners? Of course!

Editor’s note: Chris de Brauw is executive vice president at Fieldwork, Inc., a Chicago research firm.

In an organization the size of Fieldwork, our 15 offices see literally thousands of screeners each year. Some are of better quality than others. In today’s competitive environment, qualitative research users appear keener than ever to get exactly the right respondents into their groups, IDIs and ethnography studies. As a result, recruiting comes under exacting scrutiny, and the tool we give the recruiters, the screener, represents a key stepping stone toward a successful project.

As the users of so many screeners, we thought it might be a good idea to do a bit of analysis and come up with specific suggestions on how the process can be improved. To this end, we asked each Fieldwork office to complete a questionnaire with the following questions.

  • Screener length: At what point is a screener too long? For example, so long that people stop being cooperative and/or stop listening to the questions? Do you have a rule of thumb (x minutes, or x number of pages, or x number of questions)? How many “for information only” questions are reasonable?
  • Question order: What is the better order of questions? Have the narrow qualifying questions early on, and then get general background data? Or the other way around: Get the broader background data first, and only then the real qualifying questions?
  • Quality of questions/consistency of answers: What types of questions are “impossible” for respondents to answer accurately and/or consistently (i.e., that may cause different answers when people are being re-screened)? What type of info is more likely to change between screening and re-screening: behavior or attitudes? What kinds of questions invite untrue answers? If you have some examples, that would be great.
  • Articulation questions: Which ones are “impossible” to do, and/or intimidating for respondents? Which ones tend to work?
  • Homework assignments: What types of pre-group homework assignments are burdensome for respondents, and what are the kinds of homework assignments respondents love to do? What techniques are available to make sure respondents do their homework in the manner that was intended? What are the things to avoid when requesting respondents to do homework?
  • Algorithm screeners: What are some of the more and less effective ways to set up algorithm screeners? Which ones tend to be doable and which ones tend to become impossible? What should we advise clients to consider when looking for respondents on the basis of an algorithm?
  • Recruiting for ethnographies/remote locations: What special considerations are there for recruiting respondents to participate in a study in their home, while shopping or in some other venue? What are the dos and don’ts?

There was tremendous interest in this project throughout our organization. In most of the offices, a group of supervisors and recruiters developed their answers collectively.

The following summarizes the responses, with the exception of the responses to the question about algorithm screeners. This topic deserves a separate article, which we will prepare in the near future.

Highlights of the findings

A screener should serve only, or at least primarily, as a screener.

It can include some “for information only” questions, but a screener is not a data collection tool. It should identify the right people to provide the desired information, not complete most of the interview ahead of time. Qualitative screeners rarely produce usable data in the way a quant study does.

A screener and the screening interview are a form of communication in and of themselves.

As respondents progress through the questions, the screener will reveal the topic of the study, at least to a degree. Screener questions may also cue respondents into undesirable behaviors, such as stimulating “lapsed users” of a brand to try it again, even though the intent of the study was to interview lapsed users.

The screening interaction may also, inadvertently, create expectations of what the actual research will be like. It can reveal the tone of the study and the type of interview the respondent might expect: Will it be detailed, repetitive and boring, or will it be interesting and personally rewarding to participate? The quality and length of the screener affect the quality of the conversation between the recruiter and the respondent, and this in turn affects the respondent’s enthusiasm for participating.

A good screener motivates both the recruiter and the respondent.

In constructing screeners, it is useful for the writer to visualize the exchange between the recruiter and the potential respondent. In most cases, recruiters will have identified the organization they are calling from, and the potential dates for which the respondent might be eligible. To enhance the remainder of this process, a good screener…

  • is not too long, and is relatively easy to administer (i.e., it is put together with some care, checked for skips, proofread, etc.);
  • is conversational, in consumer language, not in industry-speak;
  • is matched to the age, lifestage and status of the respondents (kid questions for kids, respectful questions for business leaders, etc.);
  • identifies major screening criteria early in the interview, avoiding terminates at the very end;
  • minimizes error: has clear, labeled response choices and avoids questions that can only be answered by wild guesses;
  • does not anger, intimidate or bore the respondent;
  • is detailed enough that a trained recruiter “knows” whether the respondent will be productive or not.

Finally, focus groups, and the entire field of market and opinion research generally, are better known and understood by consumers and B2B respondents than in the past. This is especially the case in the major markets where most of the research is conducted. People tend to know what focus groups are, and that they have to answer a number of questions in order to meet the requirements of the study they might participate in. With this general awareness comes an obligation on us, the industry, to treat people with respect. The MRA code says, “Respect people’s time.”

Detailed results

Screener length

A majority of our offices felt that a screener should take about 10 minutes or less to administer. If the screening interview takes longer, respondents tend to lose interest, become sloppy in answering the questions or get angry. Amazingly, many questions are added to screeners that merely collect data with no bearing on qualification. The value of such questions is usually quite marginal, and if there are a lot of them, they can have a definite negative impact on respondent attitudes.

Some of the offices felt that 15-20 minutes is still quite appropriate for completing a screening interview. Some “information only” questions are fine, so long as there aren’t too many. However, if there are instructions to the respondent (e.g., for a homework assignment), these should be planned as part of that 20-minute limit.

Once the screening interview becomes longer than 20 minutes, and especially more than 30 minutes, the entire relationship between the recruiter and the respondent becomes strained. It is poor PR to keep respondents on the phone for a long time. We want to avoid creating an impression that we “got the information we needed and then terminated them.”

So our suggestion is to look for ways to eliminate time-wasting questions and approaches in qualitative screeners. For example, how many products and brands are really needed to disguise the client brand? Too often, it seems, researchers are tempted to take an already existing set of questions, or question grid, and throw the entire battery into the qualitative screener. This may produce four questions for all 12 brands listed, when only one of those four questions is relevant for screening purposes. Similarly, an extensive battery of attitude statements may be included, but how many of these are actually required to determine whether the respondent qualifies?

If it is desirable to get a lot of product use information or other background data on each study respondent, consider creating a form for respondents to fill out while at the facility. Once there, respondents are willing to give time. We must avoid exhausting their patience while on the phone.

Question order

If at all possible, position the key qualifying questions early in the screener. Not only is this efficient, allowing the recruiters to complete more dialings; it also gives a better sense of how hard or easy a study will be to recruit in general. And if terminates come early, respondents don’t feel they have wasted a lot of time and will be more willing to be screened for another study in the future.

Questions to avoid: guesses

Questions that are impossible to answer lead to problems in qualitative recruiting. Here are some verbatim responses we received from our offices:

  • “How can anyone remember how many times she has bought Ruffles, Pringles or Lays in the past six months?”
  • “People don’t know how often they have been to a certain store in the past three months, or how many times out of the last 10 times.”
  • “You’re sure to get wild guesses and the answers will change as you re-screen (the second time around she may have thought about it).”

Asking people how many times they have done something, especially over a fairly long period of time, like the past three or six months, forces them to guess. Guesses are fine if you are in the quantitative data collection business, but not when you are screening for a particular respondent.

A screener is not a quant data collection tool, yet clients often give us questions from quant questionnaires to screen people with.

Here’s the problem: If you ask 100 people a question that calls for a numerical scale rating (any number between 1 and 10), or a frequency guess, the average of those 100 responses is meaningful and reliable. Ask another 100 similarly chosen respondents the same question and roughly the same average will come out. As long as respondents’ “errors” are random, they cancel out, and the average of the combined answers will be reliable, which is what quant researchers want.

Unfortunately, the apparent “accuracy” of the average creates the false expectation that every respondent’s answer is “accurate.” But on an individual level, this just isn’t true.

On an individual level a numerical rating or frequency guess is not reliable. A respondent may give you a “6” on a rainy day, and an “8” on a sunny day. In qualitative screening, the numbers approach is unreliable, and should be avoided as much as possible.
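To see this concretely, here is a minimal simulation, a sketch in Python with invented numbers rather than data from any actual study. Each simulated respondent has a stable underlying opinion, but every single answer adds a bit of day-to-day noise:

    import random

    random.seed(42)

    def answer(true_opinion, noise=2):
        # One screening answer: the respondent's stable opinion plus
        # day-to-day noise (mood, weather, distraction), clipped to 1-10.
        return max(1, min(10, true_opinion + random.randint(-noise, noise)))

    # 100 hypothetical respondents, each with an underlying true opinion.
    opinions = [random.randint(4, 9) for _ in range(100)]

    first_call = [answer(o) for o in opinions]   # initial screening
    second_call = [answer(o) for o in opinions]  # re-screening the same people

    print(sum(first_call) / 100)   # the two wave averages nearly coincide...
    print(sum(second_call) / 100)
    changed = sum(a != b for a, b in zip(first_call, second_call))
    print(changed)                 # ...yet most individuals give a different number

The two averages agree to within a few tenths of a point, while the large majority of individual answers change between the two calls: reliable in the aggregate, unreliable one respondent at a time.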

What to do instead? We suggest using simple scales, with verbally labeled scale points. People will remember their perceptions and emotions better if you don’t give them too many choices:

“One of my favorites.” “It’s OK.” “Poor.”

“Definitely.” “Maybe.” “Not.”

“Agree.” “Disagree.” “True.” “False.”

“More than once a week.” “Once a week.” “Less than once a week.” “Less than once a month.”

The point is to keep scale levels simple: just enough to define whatever your screening criterion is (committed user vs. marginal or lapsed user, etc.).

If we give respondents too many options, like seven or nine different levels of purchase interest or agreement-disagreement (as in “definitely,” “somewhat,” “slightly,” “I might or might not,” etc.), it becomes mush in respondents’ minds. A greater number of scale levels does not make it easier to identify trier-acceptors and trier-rejectors. On the contrary, it makes matters more confusing. The respondent will give an answer, but you cannot be sure you’ll get the same answer in re-screening.

Articulation questions

A screener definitely should include more than just closed-ended questions. Open-ended questions allow recruiters to identify respondents to be avoided: people who are uncooperative, unable to express themselves, or who have language problems, thick accents or speech impediments. At the same time, some open-ended questions can build positive expectations about the real interviewing event: that there is an interest in the respondent’s personal opinions and feelings.

Articulation questions go beyond simple open-ended questions. They are, generally, a good idea, especially in consumer studies. Articulation questions can reassure researchers that respondents will be productive, cooperative, not prone to shy away from answering questions that are less than totally predictable, and able to communicate their personal experiences and perceptions.

The issues with articulation questions have to do with their purpose: Are they intended to ensure articulateness of the respondents, or is the client looking for highly imaginative and creative respondents?

A number of commonly used articulation questions appear to be geared more for identifying highly imaginative or creative respondents. They can be quite off-putting to others. Perfectly articulate respondents may be stumped by questions such as:

  • How would you describe a sunset to an alien?
  • Give me 10 (or 18 or 30) ways to use a paper clip (or brick or rubber band).
  • Which celebrity (or person in history) would you invite to dinner and what would you talk about? (This is especially hard when it involves a dead person.)

There may be many circumstances where it is desirable to have productive, verbally expressive - but not necessarily highly creative - respondents. If this is the case, we suggest using articulation questions that get respondents talking about their personal experiences and preferences, rather than about something totally unexpected and out of the blue.

It is also a good idea to match such questions to the respondent’s age or life stage. Ask women about shopping or their vacations; men about hardware stores or their cars; teenagers about music and movies. And have a follow-up question like, “What was the highlight for you?” or “What is most important to you?”

Here are some verbatim responses from our offices:

  • “An articulation question should be one that (almost) everyone feels comfortable answering - one that does not require deep thought on a different topic. Respondents may become intimidated. (Think about where the respondent is, on the phone, with kids running around!)”
  • “The better articulation questions are those related to the topic of the screener. It is difficult to switch gears if a respondent has spent 10 minutes talking about soap products, and we suddenly ask them about their favorite celebrity and what they would want to talk about at dinner with him/her.”
  • “Avoid what we call the Barbara Walters question: ‘If you were a tree, what would it be?’”

For consumer studies we do recommend using articulation questions, but make them fit the purpose. Distinguish between the need for productive, well-spoken people vs. the need for creative individuals. For B2B respondents, on the other hand, we are not so convinced that articulation questions are required. B2B respondents can be expected to be knowledgeable about their field and, as a result, able to say a lot about it.

Finally, regardless of whether an articulation question is used or not, the trained recruiter must use judgment as well. On the basis of the conversation with the potential respondent, he or she must know whether the respondent is likely to be productive, can carry a conversation, is cooperative, and has no speech impediments or heavy accents.

Homework assignments

The idea of asking respondents to do something before the interviewing event, to focus their attention on the topics and issues to be covered, is gaining popularity among researchers. While not technically part of the screener, the explanation of a homework assignment represents an important and time-consuming part of the screening interview, from both the recruiter’s and the respondent’s perspective.

The most fundamental thing to remember when plans call for a homework assignment is to allow sufficient time for recruitment and for respondents to complete their assignment before the main interviewing session. If items need to be mailed or purchased, or a store must be visited, recruitment must be completed well before the interviewing event to allow for such activities. And when respondents are recruited a week or more before an actual interview, the likelihood of conflicts and cancellations goes up, so this must be incorporated in the planning for over-recruits.

Try to match the assignment to the age and personality of the respondents. Generally, women and young children are more conscientious and reliable than men and teenagers in completing their homework.

“Men especially are most likely to cancel rather than complete an assignment, particularly if they have to write down what their favorite gray pants mean to them, or make a collage about their favorite toilet tissue. Asking them to bring in a copy of their phone bill is about the extent of what they will do.”

Assume that respondents don’t follow directions well. It is very important to keep everything as simple as possible. Reading a page and a half of instructions over the phone is not only wasteful, it is also nearly impossible for respondents to absorb.

Make the assignment interesting. If it is a tedious task, respondents will not give it much effort, and the results will be disappointing.

Some typically successful homework assignments: trying a product; shopping trips (if not too many); clipping photos, articles; watching a videotape; doing something on the Internet; collages, with the right people (not 18-24-year-old males); bringing in an item or several items.

Some problem homework assignments: diaries (especially ones that take more than a few days to complete); keeping track of/recording all activities during a day; complicated or multiple shopping trips; photo and video diaries (although this technique may work with some).

Some more comments from our offices:

  • “Forget about lengthy diaries where they need to record everything they do in a typical day, week or month. They hate these and don’t do them diligently. We had one where we couldn’t get hardly anyone to finish it because it was too tedious. The best homework assignments are easy collages, picture-taking, a couple of questions they need to answer, or bringing things with them.”
  • “They always seem to enjoy doing something on the Internet. If you have a technology-based assignment, make sure you have someone available to help with tech support.”
  • “Never make a homework assignment optional.”

Ethnographies

Screening for ethnographies (or, in general, interviews away from a facility) is not very different from other types of studies, except that certain things need to be planned carefully.

It always takes longer than anticipated to get from one interview to the next, so it is very important to allow for enough time between appointments. Directions are often not completely accurate, and traffic conditions can be surprisingly poor on local roads at all times of the day.

It is good practice to let the respondent know the name(s) of the person(s) who will conduct the interview, and those persons should be prepared to identify themselves with a photo ID. It is also desirable to let the respondent know ahead of time that the project team will be taking photos or video of the interview.

Finally, never send a man (or men) to a woman’s home, unless there is a woman on the interviewing/observation team.

Crucial link

Screeners and the screening process should get the attention they deserve. They are a crucial link in the success of a project. Too often, screeners are prepared in a hurry: not thought through, not carefully designed, not spell-checked, and not checked for flow after the last revisions to an earlier draft.

Use some common sense, and, in developing a screener, visualize what the conversation between the recruiter and the potential respondent will be like.

As researchers, we are asking people who don’t owe us anything to participate in our studies. We offer to compensate respondents for their time, once they qualify and are able to attend, but not for going through the qualification process.

If we make this process unnecessarily unpleasant or difficult, there’s a strong possibility that reasonable, well-qualified respondents will give up and decline to participate in a study. This is most unfortunate. It potentially eliminates people who should be included in the research. Moreover, it wastes money, time and the enthusiasm people feel for participating in research, which often can be a rewarding experience.