Editor’s note: Scott Dimetrosky is a senior associate, and Sami Khawaja is president, at quantec, LLC, a Portland, Ore., economic consulting firm. Phil Degens is an evaluation coordinator at the Northwest Energy Efficiency Alliance, a non-profit group of electric utilities, state governments, public interest groups and industry representatives.

While many years of research have established best practice approaches for mail and telephone surveys, the burgeoning field of online research has raised a number of fundamental questions:

  • What type of layout (screen vs. scrolling) works best?
  • Do incentives have any impact?
  • What other techniques will minimize sources of survey error?
  • How do the results of online surveys compare with other survey modes?

Most importantly, many researchers are still wondering whether online survey research is even a meaningful option or merely a “convenience sample” approach.

With over 149.6 million Internet users in the United States (according to Nielsen//NetRatings) and the increasing popularity of broader bandwidth, it’s clear that online survey research will only become more prevalent. The question, therefore, is not whether to conduct online surveys, but how best to conduct them.

A number of recent studies have sought to answer many of these questions, gradually helping researchers understand the issues and establish best practices for online survey research. This article summarizes some of the more notable findings from recently published studies and should assist any researcher considering conducting a survey online. Some questions, of course, will remain open until more definitive research is available.

Online survey context: What are the sources of error?

Dillman and Bowker (2000) identify four main sources of survey error and discuss practices that will minimize these errors. The papers that we reviewed examined these practices in even greater detail, often presenting powerful evidence that will help guide the researcher. The errors to be aware of are:

  • Coverage error: The result of all units in a defined population not having a known nonzero probability of being included in the sample drawn to represent the population.
  • Sampling error: The result of surveying a sample of the population rather than the entire population (a worked illustration follows this list).
  • Measurement error: The result of inaccurate responses that stem from poor question wording, poor interviewing, survey mode effects and/or some aspect of the respondent’s behavior.
  • Nonresponse error: The result of nonresponse from people in the sample, who, if they had responded, would have provided different answers to the survey questions than those who did respond to the survey.
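
As a concrete illustration of sampling error (a standard survey-statistics formula, not a result from the studies reviewed here), the margin of error for a proportion estimated from a simple random sample is

$$\text{MoE} = z\sqrt{\frac{p(1-p)}{n}}$$

where $p$ is the observed proportion, $n$ is the sample size, and $z$ is the critical value for the chosen confidence level (1.96 for 95 percent confidence). A sample of 400 respondents with $p = 0.5$, for example, carries a margin of error of roughly ±4.9 percentage points.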

What survey mode is best to use? Table 1 provides a general comparison.

Table 1: General comparison of survey modes

Overall response rate
  • Mail: Good, with proper incentives
  • Telephone: Good, but increasingly more difficult
  • Web: Good with e-mail invite, poor otherwise
  • Sources: Kwak and Radler, 2000; Guterbock and colleagues, 2000; Medlin, Roy, and Ham Chai, 1999; Schaefer and Dillman, 1998

Item response rate
  • Mail: Good
  • Telephone: Good
  • Web: Excellent for screen layout, poor to good for scroll layout
  • Sources: Kwak and Radler, 2000; Medlin, Roy, and Ham Chai, 1999; Schaefer and Dillman, 1998; Vehovar and Manfreda, 2000; Dommeyer and Moriarty, 2000

Self-selection bias
  • Mail: Minor
  • Telephone: Minor, but increasingly a problem
  • Web: Minor with a targeted e-mail invite; considerable if simply posted on a Web page
  • Sources: Manfreda, Vehovar, and Batagelj, 1999; Askew, Craighill, and Zukin, 2000; Krotki, 2000; McCready, 2000

Cost
  • Mail: Expensive for large samples, better for smaller samples
  • Telephone: Less expensive for larger samples
  • Web: Normally the least expensive, particularly for large samples
  • Sources: Bauman, Jobity, Airey, and Atak, 2000; Aoki and Elasmar, 2000

Turnaround time
  • Mail: Poor
  • Telephone: Good
  • Web: Excellent
  • Sources: Aoki and Elasmar, 2000; Kennedy, 2000; Tedesco, Zukerberg, and Nichols, 1999

Data entry accuracy
  • Mail: Requires keypunch verification
  • Telephone: Good with CATI system
  • Web: Excellent with proper layout; can also use pop-up verification
  • Sources: Kennedy, 2000; Tedesco, Zukerberg, and Nichols, 1999

Length of time for respondent to complete survey
  • Mail: Slow
  • Telephone: Reasonable
  • Web: Can be fast
  • Sources: Aoki and Elasmar, 2000

Open-ended responses
  • Mail: Good
  • Telephone: Good
  • Web: Uncertain; research has found contradictory results
  • Sources: Farmer, 1998; Totten, 2000; Aoki and Elasmar, 2000; Schaefer and Dillman, 1998; Kwak and Radler, 2000; Ramirez, Sharp, and Foster, 2000

Conducting online surveys

E-mail attached vs. embedded: The literature review indicated that embedded surveys are far superior to attached surveys (Dommeyer and Moriarty, 2000). The embedded e-mail survey, despite its formatting limitations, can be answered and returned by the most unsophisticated of e-mail users, and, therefore, can appeal to a broader audience.

E-mail only vs. e-mail with Web link: Surprisingly, no studies were found that compare e-mail-only surveys with e-mail invitations linking to a Web survey. This may reflect the lack of sophisticated e-mail survey software for data entry (and the advances in data collection using the Web). It may also be that, as more users connect to the Internet with higher bandwidth, the link from the e-mail to the Web survey will appear seamless, and researchers are therefore focusing on that future technology.

Screen vs. scroll layout: Research has indicated that there is no difference in dropout rates between scrolling and screen-based (putting just one or a small series of questions on one screen) surveys. However, screen-based surveys had lower item nonresponse (Vehovar and Manfreda, 2000). A screen-based approach also allows skip patterns to be fully automated (Tedesco, Zukerberg, and Nichols, 1999). Other papers point out that, in a scrollable or static Web design where all the questions are displayed on a single HTML page, the respondent can make more informed decisions about participation based on the content of the survey (Crawford and Couper, 2000).
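
To make the skip-automation point concrete, here is a minimal sketch in Python of how a screen-based survey can route respondents; the question IDs, wording, and branching rules are hypothetical examples, not taken from any of the studies cited.

```python
# Minimal sketch of automated skip logic for a screen-based Web survey.
# Question IDs, wording, and branching rules are hypothetical.

QUESTIONS = {
    "q1": "Do you own a home computer?",
    "q2": "About how many hours per week do you use it?",
    "q3": "Do you have Internet access at home?",
}

# Each rule maps (question, answer) -> next question; None ends the survey.
# A None answer acts as a wildcard for "any response".
SKIP_RULES = {
    ("q1", "yes"): "q2",
    ("q1", "no"): "q3",   # non-owners skip the usage question entirely
    ("q2", None): "q3",
    ("q3", None): None,
}

def next_question(current_id, answer):
    """Return the ID of the next screen to display, applying skip rules."""
    if (current_id, answer) in SKIP_RULES:
        return SKIP_RULES[(current_id, answer)]
    return SKIP_RULES.get((current_id, None))

# A respondent who answers "no" to q1 is routed straight to q3,
# with no chance of answering (or mis-answering) the skipped item.
print(next_question("q1", "no"))  # -> q3
```

Because the server (or client script) controls which screen appears next, the respondent never sees instructions such as “if no, skip to question 3,” which is where manual skip errors occur in scrollable or paper designs.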

Use of logos/fancy designs: The literature indicates that using logos (graphics) can reduce item nonresponse, as the picture may help “trigger” a respondent’s memory, particularly in awareness questions (Vehovar and Manfreda, 2000). However, the use of excessive graphics or logos can lead to an increase in overall survey nonresponse, presumably due to slow downloading (Vehovar and Manfreda, 2000; Dillman, Tortora, Conradt, and Bowker, 1998). Careful survey designers must therefore consider the connection speeds of their target respondents and gradually increase the use of logos/fancy designs as higher bandwidth becomes more common.

Alignment of text/general layout: Research has found that the alignment of questions (left- vs. right-justified) and the location of answer categories (left vs. right) had no impact on results (Dillman and Bowker, 2000). However, the literature does indicate that white space on the screen (which respondents found confusing) should be minimized, and that simplicity of format and ease of navigating through the document are of paramount importance to respondents (Dillman and Bowker, 2000). The use of JavaScript or other advanced programming techniques can also seriously limit the number of people who can take the survey, or detract from the survey’s credibility (Dillman and Bowker, 2000; Kennedy, 2000).

Invitations/reminders: The more active the invitation (e.g., pop-up windows or e-mail invites with hyperlinks), the better the response; the more passive the invitation (e.g., icons and banners), the greater the self-selection and the lower the response (Bauman, Jobity, Airey, and Atak, 2000). In addition, reminders were generally found to increase response rates, often producing visible spikes in response (Clark and Harrison, 2000). More frequent reminders led to higher response rates (Crawford and Couper, 2000). Some researchers also found no statistical difference in response rate between those receiving an e-mail reminder and those receiving a phone reminder (Clark and Harrison, 2000).
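
As a rough sketch of how a reminder schedule might be managed (the sample records, field names, and four-day interval below are assumptions for illustration, not a procedure from the cited papers), the task amounts to selecting non-respondents whose invitation is old enough:

```python
# Sketch: select non-respondents who are due for an e-mail reminder.
# Sample records, field names, and the 4-day interval are assumptions.
from datetime import date, timedelta

REMINDER_INTERVAL = timedelta(days=4)

sample = [
    {"email": "a@example.com", "invited": date(2001, 3, 1), "responded": True},
    {"email": "b@example.com", "invited": date(2001, 3, 1), "responded": False},
    {"email": "c@example.com", "invited": date(2001, 3, 5), "responded": False},
]

def due_for_reminder(records, today):
    """Return addresses of non-respondents whose invitation is old enough."""
    return [r["email"] for r in records
            if not r["responded"] and today - r["invited"] >= REMINDER_INTERVAL]

print(due_for_reminder(sample, date(2001, 3, 6)))  # -> ['b@example.com']
```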

Incentives/lotteries: Per-respondent incentives, which produce the best response rates for mail surveys, are logistically difficult and costly to administer for online surveys (Bauman, Jobity, Airey, and Atak, 2000). Larger, more valuable prizes such as Palm Pilots were also found to help increase response rates (Bauman, Jobity, Airey, and Atak, 2000). As with mail surveys, prizes should be appropriate for the audience and should not detract from the seriousness of the study (Bauman, Jobity, Airey, and Atak, 2000). There were no studies of “instant winner” prizes, although these are becoming more popular.

Open-ended questions: The research seems to indicate contradictory results for the use of open-ended questions in online surveys. A number of papers (Farmer, 1998; Totten, 2000; Aoki and Elasmar, 2000) reported that online open-ended responses were not as informative as those from other survey modes. Other papers, however, reported that online surveys produced preferable open-ended responses in terms of level of detail and number of words (Schaefer and Dillman, 1998; Kwak and Radler, 2000; Ramirez, Sharp, and Foster, 2000).

Progress indicators: While a progress indicator was hypothesized to increase the response rate, one research team found that the indicator reflected only the number of questions remaining, not the actual time required (open-ended questions, for example, take longer), and was therefore ineffective (Crawford and Couper, 2000). No research tested the effect of telling users the number of questions in advance, although those who were told the survey would take less time (regardless of actual time) were more likely to complete it; in other words, dropouts were not higher for surveys that took longer than the invitation indicated (Crawford and Couper, 2000).
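
The sketch below illustrates why a count-based indicator can misstate remaining effort: it weights every question equally, even though an open-ended item typically takes far longer to answer than a closed one. The question mix and time estimates are invented for the example.

```python
# Sketch: a count-based progress indicator vs. an effort-weighted one.
# Question types and average answer times are illustrative assumptions.

questions = ["closed", "closed", "open", "closed", "open"]
EST_SECONDS = {"closed": 10, "open": 60}

def progress_by_count(answered, total):
    """What a simple question counter reports."""
    return answered / total

def progress_by_effort(questions, answered):
    """Share of expected answering time already spent."""
    done = sum(EST_SECONDS[q] for q in questions[:answered])
    total = sum(EST_SECONDS[q] for q in questions)
    return done / total

# After two of five questions the counter says 40 percent complete,
# but only about 13 percent of the expected answering time has elapsed.
print(progress_by_count(2, len(questions)))        # 0.4
print(round(progress_by_effort(questions, 2), 2))  # 0.13
```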

Password access: Password access is important for surveys in which there is a risk of respondents completing the survey more than once. Some research indicated that passwords may even increase the response rate, particularly when they are simple and are easily cut and pasted into the survey entry page (Crawford and Couper, 2000).
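
A minimal sketch of the approach (assumed implementation details, not drawn from the cited studies): issue each sample member a short code that is easy to cut and paste, and allow each code to be used only once.

```python
# Sketch: simple per-respondent access codes, each valid for one completion.
# Code length, alphabet, and storage are illustrative assumptions.
import secrets
import string

ALPHABET = string.ascii_lowercase + string.digits

def issue_passwords(n, length=6):
    """Generate n unique, short, easy-to-paste access codes."""
    codes = set()
    while len(codes) < n:
        codes.add("".join(secrets.choice(ALPHABET) for _ in range(length)))
    return codes

valid_codes = issue_passwords(500)
used_codes = set()

def check_access(code):
    """Admit a respondent once per code; reject unknown or reused codes."""
    if code not in valid_codes or code in used_codes:
        return False
    used_codes.add(code)
    return True
```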

Weighting responses to represent the population: The literature is somewhat contradictory concerning researchers’ ability to weight online surveys to match target populations. Some researchers have adamantly declared that, even with weighting, self-selection bias and coverage error (the exclusion of those without online access) severely undermine the meaningfulness of online survey results (Manfreda, Vehovar, and Batagelj, 1999; Askew, Craighill, and Zukin, 2000). There is, however, a commercially available Web panel that carefully selects participants and provides them with hardware and software (through WebTV); its results have been found statistically comparable to those of a random-digit-dialed (RDD) CATI study (Krotki, 2000; McCready, 2000).
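
For readers unfamiliar with how such weighting works, the sketch below shows simple post-stratification: each respondent in a demographic cell is weighted by the ratio of that cell’s population share to its sample share. The age groups, shares, and awareness figures are invented for illustration and are not data from the cited studies.

```python
# Sketch: post-stratification weights by age group.
# Population shares, sample shares, and survey results are invented.

population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
sample_share     = {"18-34": 0.50, "35-54": 0.35, "55+": 0.15}

# Weight = population share / sample share for each cell.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# A weighted estimate of, say, product awareness across groups:
awareness = {"18-34": 0.60, "35-54": 0.45, "55+": 0.30}
weighted_estimate = sum(awareness[g] * sample_share[g] * weights[g]
                        for g in population_share)

print(weights)                      # younger cells weighted down, older up
print(round(weighted_estimate, 2))  # 0.45, vs. about 0.50 unweighted
```

As the critics cited above note, such weighting can correct demographic imbalances in the sample but cannot repair the coverage error created by households without online access.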

Introductions: The survey introduction in the Web context is best served by brevity; some researchers found that participation increased when they converted the traditional cover letter approach to a succinct intro with FAQs (Bauman, Jobity, Airey, and Atak, 2000). Researchers also recommend that the first question be short and easy to fill in, so that potential respondents are not discouraged (Dillman and Bowker, 2000).

Response rates: Several studies comparing Web surveys to mail methods find lower response rates for the former. For example, Kwak and Radler (2000) obtained a response rate of 42 percent for mail and 27 percent for Web on comparable samples of college students. In a similar study, Guterbock and colleagues (2000) obtained response rates of 48 percent for mail and 37 percent for Web. In a survey of computer software companies in Australia, Medlin, Roy, and Ham Chai (1999) obtained a response rate of 47 percent for mail and 28 percent for Web. Similar differences have been found for e-mail surveys (Schaefer and Dillman, 1998).

Use of sort tests, concept tests, package testing, and copy development: General Mills has had positive results using the Internet for consumer product testing. It has investigated over 100 validation cases and found generally minor differences between online and offline research. The main differences stem from in-person vs. self-administered study designs, underscoring that all stimulus and questionnaire materials must be clearly understandable without interviewer prompting (Peterson, 2000).

For more information

An excellent source of online survey research information, including an extensive bibliography with links to many of the papers cited here, can be found at www.websm.org, provided by the University of Ljubljana, Slovenia. Additional information may be obtained from the Interactive Marketing Research Organization (IMRO), which promotes online scientific and ethical research practices (www.imro.org).