Lingering differences

Editor’s note: Allen Hogg is director of Internet planning and analysis for Millward Brown, Naperville, Ill.

A check of the Quirk’s Marketing Research Review online article archive shows that it was a decade ago that the U.S. research community first began debating the merits of using the Internet as a data collection method. Not long after, reports began to be published from parallel surveys conducted simultaneously on the telephone and on the Web.

One might expect that in 2005, researchers would have a solid grasp on how online respondent populations are likely to differ from those reached through telephone interviewing - and how study results are likely to be affected by the different survey methods.

There remains, however, uncertainty among many researchers about how their study results might change when switching from phone to online data collection. Some research companies tout the ability to project from online samples to the general population, leaving clients disappointed when they find that the demographics of Internet survey respondents differ from the known profile of their target populations and that responses to their Web questionnaires don’t resemble those obtained in previous phone surveying.

One reason definitive knowledge has not emerged is that the populations that can be reached through commercial sample providers have continued to shift as Internet penetration has increased and online panels have evolved their recruitment methods. The populations that can be reached through telephone surveying might also be changing, as ever-greater percentages of the public decline to participate in research, screen calls, or cannot be reached because they have shifted to exclusive use of mobile telephones.

Population and methodological differences

Millward Brown began its parallel surveying in 1997. Summarizing findings across parallel studies conducted during just the last few years, however, probably sheds better light on how Web and phone results are likely to differ in 2005. An investigation of two dozen side-by-side research efforts Millward Brown conducted in the U.S. between 2001 and 2004 shows that, despite the shrinking percentage of Americans who do not have Internet access, the population that completes a Web survey can still differ in important ways from phone respondents if appropriate controls aren’t put in place to prevent such deviations.

Researchers should also expect persistent differences in responses driven primarily by the data collection method itself. It has long been recognized that people respond differently to self-administered and interviewer-administered surveys. There can, for example, be a tendency to present more “socially desirable” responses when speaking with interviewers. Web surveys also allow visual stimuli to be presented in ways phone surveys cannot, which will have an effect on results.

It is hoped that Millward Brown’s experience with parallel Web and phone research will be relevant to a large number of researchers. The organization’s side-by-side surveys have centered on consumer experience with brands and marketing communications in a number of different product categories, including foods and beverages, home and personal care products, clothing, automobiles, health care devices, telecommunications and Internet sites and services. Consequently, screening requirements have varied considerably from study to study.

The parallel studies have also involved a number of different online sample suppliers. Millward Brown does not rely on a single online sample partner but instead helps clients choose the most appropriate and cost-effective source for each Internet study that Millward Brown programs and hosts. The findings here include results obtained from members of the Lightspeed Consumer Panel, Survey Sampling International’s SurveySpot panel, and the Greenfield Online panel, as well as from respondents going to the Opinion Place survey site. In some of the parallel studies, no single provider could be found to meet the study’s ongoing online sample needs, so Millward Brown surveyed individuals from multiple sample vendors in order to keep the respondent mix consistent over time.

Demographics

An unfiltered set of respondents from any of these online sample sources will probably not match the demographic profile of U.S. adults. Most obviously, those who join panels or go to survey sites are predominantly women. It is not, however, difficult to get an equal number of men and women to take any particular survey, if that is what is desired. A panel will simply send out e-mail invitations to the appropriate proportions of men and women, and Millward Brown will use a quota management system to ensure that the right balance between the sexes is maintained among those completing the questionnaire. In none of the parallel studies that we have hosted in the past few years has the percentage of female respondents been significantly greater online than it was on the phone.

On other demographic characteristics, it can be easy for subtle differences in distributions of phone and Web respondents to emerge if controls aren’t set up carefully. For example, without quota controls in place, online respondents can tend to cluster more around the middle of the adult age scale than phone respondents do.

Online respondents can also cluster more around the middle of the household income scale than phone respondents. (Even though people with high incomes disproportionately have online access, members of households with annual incomes greater than $80,000 can actually be under-represented among members of online communities who agree to take surveys for what are typically nominal incentives.) In many cases, education quotas will be more appropriate than income quotas for controlling online respondents, as differences in education classifications between online and phone respondents are typically greater than differences seen in reported income levels - not surprising considering that a certain degree of literacy is required to complete online surveys.
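To make the quota idea concrete, here is a minimal sketch, in Python, of how completes might be screened against age and education cells. The cell labels, targets and function name are purely illustrative assumptions and are not drawn from Millward Brown’s actual quota management system.

```python
# Hypothetical quota-cell sketch; labels, targets and names are illustrative
# only and are not drawn from Millward Brown's actual quota management system.

AGE_QUOTAS = {"18-34": 300, "35-54": 400, "55+": 300}             # target completes per age cell
EDU_QUOTAS = {"hs_or_less": 350, "some_college": 350, "college_plus": 300}

completed = {"age": dict.fromkeys(AGE_QUOTAS, 0),
             "edu": dict.fromkeys(EDU_QUOTAS, 0)}

def accept_respondent(age_cell: str, edu_cell: str) -> bool:
    """Admit a respondent only if both of their quota cells are still open."""
    if completed["age"][age_cell] >= AGE_QUOTAS[age_cell]:
        return False
    if completed["edu"][edu_cell] >= EDU_QUOTAS[edu_cell]:
        return False
    completed["age"][age_cell] += 1
    completed["edu"][edu_cell] += 1
    return True

# A 40-year-old college graduate enters the survey while both cells are open.
print(accept_respondent("35-54", "college_plus"))  # True
```

In practice the invitation stream, not just the entry gate, would be managed against these targets, but the principle of closing full cells is the same.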

Media usage and psychographics

Of course, the greatest difference between respondents completing a Web questionnaire and those completing a telephone interview is that all the online respondents must have access to the Internet and be willing and able to complete the survey. Even when telephone respondents have been restricted to those with Web access, online respondents will tend to report greater usage of the Internet.

Although some researchers have expressed concern that online respondents are using the Internet in lieu of other media, we have not found evidence to support this. When media usage has been examined in parallel studies, a higher percentage of online respondents have indicated watching television in the prior week, and they have reported more hours of viewing than phone respondents have. There may be a stigma associated with heavy TV viewing that makes phone respondents less likely to admit to interviewers how much television they watch, but online respondents have also reported more magazine and newspaper readership than phone respondents.

Even if phone and online profiles match perfectly, there still might be psychographic differences between the populations taking phone and Web surveys. A 23-year-old single white male with a college education and a full-time job who has joined an online panel or gone to a survey site might express opinions and exhibit behaviors different from his counterpart who takes a call from a marketing research firm and agrees to complete a survey - and both of these people might be different from their counterpart who does neither.

Demographic weighting has not, for example, eliminated the tendency for online respondents to be more likely than telephone respondents to favor niche and trendy products in some categories. Compared with telephone respondents, online respondents might also be more variety-seeking. For example, a greater percentage of Web respondents have said that they are willing to buy new brands of products that they had not heard of before entering a store.
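As a rough illustration of what demographic weighting involves, the following Python sketch post-stratifies a toy online sample so its age distribution matches assumed population targets; all figures are invented, and real studies typically weight on several characteristics at once. As the findings above suggest, matching demographics this way does not by itself remove such attitudinal differences.

```python
# Minimal post-stratification sketch: weight Web respondents so their age
# distribution matches assumed population targets. All figures are invented.

from collections import Counter

population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}    # assumed targets
respondent_ages = ["35-54"] * 55 + ["18-34"] * 25 + ["55+"] * 20  # toy online sample of 100

sample_share = {cell: n / len(respondent_ages)
                for cell, n in Counter(respondent_ages).items()}

# Each respondent in a cell gets weight = target share / observed share.
weights = {cell: population_share[cell] / sample_share[cell] for cell in population_share}
print(weights)  # {'18-34': 1.2, '35-54': 0.727..., '55+': 1.5}
```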

Brand awareness measures

Because most of the parallel surveys conducted by Millward Brown have been continuous tracking studies designed to provide feedback on the effectiveness of marketing communications and their impact on perceptions of the client’s brand, key metrics typically include brand awareness.

Although the correlation coefficient between unaided awareness of products among online respondents and unaided awareness of those same products among phone respondents has been a very high .94, unaided awareness online has averaged about four percentage points higher than unaided awareness on the phone. This difference is likely driven by the setting in which the question is asked. Although phone interviewers are trained to probe for additional brands when respondents stop volunteering them, respondents might be uncomfortable letting the line go silent for too long as they rack their brains to think of, say, another model of sport utility vehicle. The online respondent facing a screen of blank text boxes, on the other hand, might see it as a challenge to fill them up and keep thinking about the question longer.

In terms of aided awareness, scores from online respondents are again, on average, four percentage points higher than those of phone respondents, and the correlation coefficient between scores from the two methods is again very high at .90. There has, on average, been no difference seen in total awareness scores for well-known products with awareness levels greater than 90 percent. Instead, the online edge in total awareness is primarily seen for products at a middle level of awareness - and may be driven in part by Millward Brown’s standard practice of using logos or package shots instead of or along with brand names when asking respondents if they are aware of products.
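For readers who want to run this kind of method comparison on their own data, a minimal Python sketch is shown below; the awareness figures are made-up placeholders, not results from the studies described here.

```python
# Sketch of the method comparison described above: correlate awareness scores
# across brands from parallel phone and Web samples and compute the average
# gap. The scores below are made-up placeholders, not actual study results.

from statistics import correlation, mean  # correlation() needs Python 3.10+

phone_awareness  = [62, 48, 35, 22, 15, 9]   # percent aware, phone sample
online_awareness = [67, 51, 40, 25, 18, 14]  # percent aware, Web sample (same brands)

r = correlation(phone_awareness, online_awareness)
gap = mean(o - p for o, p in zip(online_awareness, phone_awareness))

print(f"correlation = {r:.2f}, average online lift = {gap:.1f} points")
```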

Marketing communications

When respondents have been asked directly whether they have seen any advertising or marketing communications for particular brands, the correlation coefficient between phone and online awareness scores has again been high: .92. On average, however, the awareness score among online respondents has been seven percentage points higher than the score among phone respondents.

When respondents have been asked about awareness of advertising or communications through individual media, the results have varied. As might be expected, online respondents have been half again as likely as phone respondents to say they are aware of Internet communications about particular brands. Online respondents have also been more likely to indicate awareness of communications through direct mail and, to a lesser extent, radio, magazines, newspapers and television. Telephone respondents, on the other hand, have been more likely to indicate awareness of communications about a brand through such low-tech means as outdoor advertising and word-of-mouth.

Respondents also are often asked if they are familiar with specific television advertisements and if they can recall the brand that was advertised. In the parallel studies examined, the percentage of online respondents correctly naming the brand that was advertised after saying they had seen the commercial was, on average, 39 percent, compared with just 27 percent for phone respondents. A prime driver of this difference is likely the fact that phone respondents have been read verbal descriptions of the ads, while Internet respondents have typically been shown visual stills from the commercial.

Brand imagery

Another key feature of many of Millward Brown’s parallel surveys has been a section asking respondents to indicate which of a series of statements apply to the client’s brands, as well as to the brands of its key competitors. On these items, people taking phone surveys have generally responded more positively than those completing online questionnaires. For the best-known client brand in each study, phone respondents have, on average, said that 46 percent of the statements read to them apply to that brand. By contrast, those completing surveys on the Internet have, on average, indicated that only 38 percent of the statements listed apply to the best-known client brand.

Some commentators have talked about the “intense candor” of Web respondents, even when compared with individuals responding by other self-administered means, such as mail surveys. This could be one factor driving down the number of times respondents will indicate that a statement applies to the brand.

The layout of the survey on the computer screen is, however, likely also contributing to the differences seen. On the phone, respondents are asked about the statements one at a time, with probing if a respondent associates just one brand with a particular statement. On the Internet, these statements are often presented along with the brands that could be chosen in a single grid. Online respondents asked to “check all that apply” in such situations might be inclined to report fewer brands than phone respondents. Indeed, negative statements as well as positive statements are more frequently associated with brands by telephone respondents.

Despite the absolute differences in number of endorsements, it should be noted that there is still a high correlation (.86) between the percentage of respondents associating a statement with a particular brand on the phone and the percentage of online respondents associating that same statement with that brand. When brand imagery responses are analyzed to determine the strengths and weaknesses of particular brands, it is very rare for something identified as a substantial strength of a brand by one set of respondents to be deemed a weakness of the brand by respondents whose views were gathered through the other data collection method.

Measuring “reality”

Identifying that phone and online survey data can be different does not, of course, mean that one data collection method is better than the other. Although telephone surveys have been a trusted tool of the research industry for many decades now, when results of Internet questionnaires differ from phone study findings, it could be that the online responses are coming closer to estimating the actual opinions and behavior of members of the population of interest.

To arrive at such a conclusion, it is necessary to have information about the target population that is known through something other than the survey process. Millward Brown clients, for example, will sometimes have sales data indicating the market shares of their own and competitive brands. In several of the parallel tests investigated, this share data has been compared to reported purchase and awareness figures from the surveys.

When this has been done, the correlation coefficient of a brand’s share and its performance on the selected online survey measure has been greater than the correlation coefficient for the comparable phone results as often as not. Neither data collection method has emerged as the consistent winner when efforts have been made to line up study findings with what is otherwise known of the reality the survey has been designed to estimate.

In other words, the fact that results from online questionnaires are likely to differ from parallel phone survey findings does not mean that they are somehow worse. For some sorts of questions, Web surveys might inspire more thoughtful, honest responses - and the use of visual prompts can spur more accurate recollections. Given cost and timing advantages that online data collection typically provides, researchers wishing to survey U.S. populations that can be reached through commercial online sample providers might be hard-pressed to justify avoiding the Internet at this point in its history.

When tracking studies or previous survey waves have been conducted on the telephone, however, it is still a good idea to engage in a period of parallel research so that changes resulting from the shift in methods can be separated from real changes in the population that have taken place over time. If the online sampling has been designed carefully, researchers should then be able to proceed with results from the initial Web surveying serving as a new benchmark.