Editor’s note: Bill MacElroy is president of Socratic Technologies, Inc., San Francisco.

Over the years, the influence of interviewer bias has been well documented. In essence, this phenomenon accounts for some of the differences found in respondent answers to identically worded questions. One interviewer will tend to elicit consistently more positive responses from certain groups of respondents; another may consistently produce negative or uncooperative outcomes. Much attention has been given to how likable the interviewer is: whether he or she is good-looking, articulate, of a compatible gender, and so on.

One area that we have begun to suspect may exert an even stronger influence is the degree to which any human contact affects respondents’ perceptions that their answers are confidential and anonymous.

We believe the privacy of the interviewee’s environment may play a key role in the types of answers given, because the change in the character of the answers doesn’t stop when no specific interviewer is present. Even in self-administered interviewing situations, the more "secure and alone" respondents feel, the more candor they appear to show. We have called this observed phenomenon the anonymity gradient.

Over the past three years, we have had several opportunities to run side-by-side studies in which the same questions were asked using different modes of field methodology (e.g., one-on-one interviewing, CATI telephone, paper and pencil, disk-by-mail and Web-based interviewing). As we examined the answers to identically worded questions, a curious pattern began to emerge. Increased human presence had the distinctive effect of producing kinder, less frank answers. This difference was also noted between paper and pencil surveys conducted with and without other people in the area.

The most candid answers (based on the degree to which people reported known problems, complained about service that was known to be a concern, and gave in-depth responses when probed for areas needing improvement) came from people using their own personal computers. Researchers have reported that when people use computers they tend to enter a "cool and immersive, womb-like environment" in which the level of engagement can produce exaggerated levels of perceived privacy. This is probably analogous to the way people feel totally alone in their cars, when in fact they are surrounded by hundreds of other people who can clearly see them through the glass. Once again, however, the presence of others tends to influence answers: people working at home on their own computers give more candid answers than those taking the same questionnaire on a computer in a research facility.

The anonymity gradient can be thought of as a pattern of candor that changes with the perceived level of privacy. This relationship is shown in the accompanying chart.
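For readers without the chart at hand, the short sketch below encodes the gradient as described above: interviewing modes ordered from least to most perceived privacy, with candor rising accordingly. The ordering follows the observations in this article; the exact placement of the middle modes and the numeric candor scores are illustrative assumptions, not measured values.

```python
# Illustrative sketch of the anonymity gradient: modes ordered from
# least to most perceived privacy. Candor scores are hypothetical
# placeholders chosen only to show the direction of the trend.
modes = [
    ("One-on-one interview",            1),  # full human presence
    ("CATI telephone interview",        2),  # voice contact only
    ("Paper and pencil, others nearby", 3),  # self-administered, observed
    ("Paper and pencil, alone",         4),  # self-administered, private
    ("Computer at research facility",   5),  # electronic, but observed
    ("Own computer at home",            6),  # most private, most candid
]

for mode, candor in modes:
    print(f"{mode:35s} candor ~ {'#' * candor}")
```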

By itself, the anonymity gradient might be an interesting anomaly without much practical value. We have, however, found some distinctive characteristics that may be helpful to researchers planning conversions from one form of research to another. This is particularly important if your company has been tracking satisfaction, performance, problem resolution, and similar topics using telephone, paper and pencil, or one-on-one interviewing techniques. There can be an unpleasant shock to the system when, after many waves of hearing from your customers that they are completely satisfied and have no problems to report, you suddenly find that they are less satisfied and have a whole list of demands for improvement. You may encounter this when converting traditional methodologies to newer technologies.

But this difference isn’t necessarily a bad thing. In fact, unless there is some vested interest in keeping responses artificially high, the more anonymous technologies may give you more accurate data. Several of the programs we have run have shown that the more candid answers are not the result of some latent "research rage," but rather tend to be more reflective of the real world.

For example, most purchase-interest data turns out to have been overstated once actual purchases are tallied. Although we haven’t had the benefit of tracking actual results, we suspect that purchase-interest data collected using a more anonymous technology may be closer to what really happens when the product actually ships.

The same is true for satisfaction figures. When people are called on the phone and asked to give an interviewer general satisfaction ratings, the percentage of people reporting high degrees of satisfaction has been found to be 5 percent to 20 percent higher than when the same questions are administered by disk-by-mail. Which is right? The answer may lie not in the ratings but in the analysis of open-ended responses. We often know in advance which areas frustrate people or cause problems. When electronic data collection methods are used, these areas are far more likely to be mentioned. In addition, the volume of typed versus spoken data reveals that people are willing to take much more time describing the situation, suggesting improvements, giving examples, and making competitive comparisons when it’s just them and their keyboard.
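To make the size of that gap concrete, here is a minimal worked sketch. It assumes a hypothetical disk-by-mail baseline of 70 percent highly satisfied and reads the 5-to-20-percent difference as percentage points; both the baseline and that reading are our assumptions, for illustration only.

```python
# Hypothetical illustration of the phone vs. disk-by-mail gap described
# above. The 70% baseline and the percentage-point reading of the
# 5-20 percent range are assumptions, not reported results.
disk_by_mail = 0.70             # assumed share reporting high satisfaction

gap_low, gap_high = 0.05, 0.20  # the 5-20 percent range from the article

phone_low = disk_by_mail + gap_low
phone_high = disk_by_mail + gap_high

print(f"Disk-by-mail: {disk_by_mail:.0%} highly satisfied")
print(f"Phone, expected range: {phone_low:.0%} to {phone_high:.0%}")
```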

We believe the anonymity gradient exists and may partially explain the differences in answers between certain types of studies. But before we can be sure of the degree of its influence, more studies should be done. As technology is used more frequently as a research tool, studies of how behavior changes across differing test situations will become increasingly important.