Capture their interest

Editor’s note: Jamin Brazil is CEO of Decipher Inc., a Fresno, Calif., research firm. Chandra Mullins is the firm’s assistant project manager and Jayme Plunkett is president. Aaron Jue is a former research manager at Decipher.

It’s no surprise that more and more companies are using Web-based surveys to conduct market research. As the online population grows, market researchers feel increasingly confident that they can reach the audiences they need via the Internet. According to Internet World Stats (www.internetworldstats.com/index.html), Internet usage has doubled in the past five years, from 500 million users to over a billion. Web-based surveys also offer cost- and time-saving advantages over traditional paper-and-pencil questionnaires. Instead of taking weeks to field a survey, quotas can now be filled in a matter of days. Additionally, there is no longer any need for costly and error-prone data entry to prepare responses for statistical packages such as SPSS.

Given the advantages, it’s no wonder market researchers have been quick to adopt Web-based surveys. However, it is important to take a step back and evaluate some key issues. First and foremost: How does this new medium affect the quality of our research? A key concern is dropouts and the resulting time, cost and data integrity implications. If too few respondents finish the survey before dropping out, extra energy must be spent recruiting more sample. Web-based surveys with high dropout rates also carry higher potential non-response error and, consequently, greater concerns over the accuracy of the data. How does the resulting non-response error impact data quality?

Incentives have long been used with traditional data collection methodologies; however, their impact on Web-based surveys remains unclear. Is it best to use a guaranteed cash incentive or a cash prize drawing? How are data quality and dropout rates affected by incentives?

Web-based surveys also provide more choices in the cosmetic appearance of the questionnaire, such as color, shading and the use of HTML tables. Do aesthetics matter? If so, what impact do they have?

Decipher Inc. conducted an online study aimed at measuring the effect of survey design, cosmetic elements and incentives on three key measures: completion rate, data quality and respondent satisfaction. The purpose of this research was to further enhance knowledge about Web-based survey design and share this knowledge with the research community.

The survey was conducted using a domestic customer list provided by eBay Inc. Over 1,900 eBay members participated in a seven-minute online survey. The recruits were sent an e-mail invitation containing a link that directed them to the survey.

Our study employed four parallel cells:

          Survey Design   Incentive
  Cell 1  Plain           None
  Cell 2  Fancy           None
  Cell 3  Fancy           1-in-500 chance to win $1,000
  Cell 4  Fancy           Guaranteed $2 cash to first 500 qualified completes

As can be seen from the setup, a comparison between Cell 1 and Cell 2 measured the effect of a plain versus fancy survey design. Color, the use of tables and right-aligned buttons distinguished the fancy survey design from the plain one (see Figure 1).

A comparison between Cells 2 and 3 or Cells 2 and 4 measured incentive effects. Finally, a comparison between Cells 3 and 4 measured the effect of the type of incentive: a cash prize drawing versus a smaller, guaranteed cash payment.

There is always the chance that respondents will complete the survey at a higher rate based on the type or value of the incentive offered. To reduce the possibility that one incentive would be viewed as more valuable than the other, both of the incentives offered in Cells 3 and 4 had equal expected values of $2. Expected value is calculated as the probability of obtaining the incentive multiplied by the incentive’s value. For example, the expected value of a 1-in-500 chance to win $1,000 is the probability of receiving the incentive (1/500) multiplied by its value ($1,000).
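Stated in notation (a restatement of the arithmetic above, with p the probability of receiving the incentive and v its value):

```latex
\[
E[\text{incentive}] = p \times v, \qquad
\text{Cell 3: } \tfrac{1}{500} \times \$1{,}000 = \$2, \qquad
\text{Cell 4: } 1 \times \$2 = \$2
\]
```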

Results

Completion rates

The first objective of this study was to measure the effect of survey design cosmetics and survey incentives on completion rate. The completion rate is the number of respondents divided by the number of people who viewed the first page, where respondents are defined as qualified completes plus non-qualified completes. The results, shown in Figure 2, indicate that the aesthetic appearance of the survey had no measurable impact on completion rate: in both instances, approximately 77 percent of respondents completed the survey. The use of an incentive, however, affected completion rate in a number of ways. First, while the type of incentive did not affect completion rates, respondents who were offered either of the two incentives were approximately 10 percent more likely to complete the survey.
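Written as a formula, using the definitions just given:

```latex
\[
\text{completion rate}
  = \frac{\text{respondents}}{\text{first-page viewers}}
  = \frac{\text{qualified completes} + \text{non-qualified completes}}{\text{first-page viewers}}
\]
```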

Incentive offers also significantly reduced the dropout percentage near the beginning of the survey. A detailed examination of the results shows that a majority of dropouts occurred during a critical period: the first 90 seconds. Figure 3 plots the cumulative dropout percentage (the number of people who did not complete a page divided by the number of people who saw it) against cumulative time spent taking the survey, in minutes. For all cells, dropouts occurred primarily in the first 90 seconds after a respondent entered the survey. Approximately 11 percent of respondents who were offered an incentive, and 16 percent of those who were not, left the survey during this period. Respondents who were offered an incentive were also significantly less likely to drop out after the first page than respondents who were not (see Figure 4).
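For readers who want to reproduce this measure on their own data, here is a minimal sketch of the cumulative-dropout calculation. It is not Decipher’s code; the record layout and field name are assumptions made for illustration.

```python
# Minimal sketch of the cumulative dropout calculation described above.
# Not Decipher's production code; the record layout is an assumption.

def cumulative_dropout(records, checkpoints):
    """records: one dict per entrant, e.g. {"seconds_before_dropout": 45},
    with None for entrants who completed the survey.
    checkpoints: elapsed-time marks (in seconds) at which to report.
    Returns {checkpoint: percent of all entrants who had dropped out}."""
    total = len(records)
    drop_times = sorted(r["seconds_before_dropout"] for r in records
                        if r["seconds_before_dropout"] is not None)
    return {t: 100.0 * sum(1 for d in drop_times if d <= t) / total
            for t in checkpoints}

# Toy example: 16 of 100 entrants leave within the first 90 seconds.
sample = ([{"seconds_before_dropout": 45}] * 16
          + [{"seconds_before_dropout": None}] * 84)
print(cumulative_dropout(sample, [90, 420]))  # {90: 16.0, 420: 16.0}
```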

However, while the presence of an incentive significantly reduced the number of respondents who dropped out in the first minute-and-a-half, incentives had no impact after this critical period. Over the remaining six minutes, the cumulative dropout percentage rose by only about 8 points, and respondents who were offered an incentive dropped out at a similar rate to those who were not.

The results shown in Figure 3 also highlight the effect that personal questions have on respondent dropout. The final question in the survey asked respondents for personal contact information, including first and last name, e-mail address and phone number. Spikes in the dropout rate occurred in all four cells after respondents viewed this question. On average, the cumulative dropout rate increased by 2 percentage points following it, accounting for 25 percent of the total rise (roughly 8 points) over the last six minutes of the survey.

In sum, these results suggest several things about completion rates. First, researchers wanting to increase completion rates should consider offering an incentive, as the presence of one significantly reduced dropout. Contrary to prior studies indicating that respondents prefer guaranteed incentives (see Westergaard, November 2005), these findings suggest that when the expected values of the incentives are equal, guaranteed incentives do not affect completion rate differently than sweepstakes incentives. The results also demonstrate the need to engage respondents quickly, within the first minute-and-a-half, and to make them aware of the incentive offer as soon as possible. Once respondents commit to the survey beyond that point, they are likely to stay through completion, regardless of the presence of an incentive.

Data quality

We also set out to examine the impact that survey design and incentives had on data quality. To measure data quality, we linked survey responses back to pre-existing information provided by eBay: from its records, we knew each recruit’s gender and the number of items he or she had bought on eBay in the past year. Measuring the accuracy of the survey data was then a simple matter of comparing eBay’s records to the responses participants gave. As Figures 5 and 6 show, neither survey design cosmetics nor incentives had any measurable impact on the data quality of objective questions. The percentage of incorrect answers (as judged against eBay’s records) was similar across the plain-versus-fancy and incentive-versus-no-incentive comparisons. Nor were there systematic differences between the comparison groups in the way participants answered survey questions, with the exception of product interest and respondent satisfaction with the survey experience.
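As a concrete illustration of this comparison, the sketch below computes the percentage of incorrect answers for a single attribute. The dictionary layout and field names are assumptions for this example, not eBay’s or Decipher’s actual formats.

```python
# Illustrative sketch of the accuracy check described above; the data
# layout and field names are assumptions made for this example.

def percent_incorrect(known, reported, field):
    """known/reported: dicts keyed by respondent id, each mapping to a
    dict of attributes, e.g. {"gender": "F", "items_bought": 12}.
    Returns the percentage of respondents whose reported value for
    `field` disagrees with the known record."""
    shared = [rid for rid in reported if rid in known]
    if not shared:
        return 0.0
    wrong = sum(1 for rid in shared
                if reported[rid][field] != known[rid][field])
    return 100.0 * wrong / len(shared)

# Toy example (values invented for illustration):
known = {1: {"gender": "F"}, 2: {"gender": "M"}}
reported = {1: {"gender": "F"}, 2: {"gender": "F"}}
print(percent_incorrect(known, reported, "gender"))  # 50.0
```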

While survey design and the use of incentives did not affect the data quality of objective questions (such as gender or the number of items purchased in the past year), the presence of an incentive did impact the more opinion-based, subjective questions (see Figure 7). The survey asked respondents to rate their interest in several eBay products. For each product, respondents who were not offered an incentive were significantly more likely (at the 95 percent confidence level) to express disinterest.

These results suggest that while researchers may feel pressure to offer incentives in order to increase survey completion rates, the approach has drawbacks. Though data quality may not suffer on objective questions, the presence of an incentive may introduce bias by inflating respondents’ self-reported interest in the products or services being studied.

Respondent satisfaction

Finally, this study aimed to examine the impact that survey design and incentives have on respondent satisfaction with the survey experience. At the end of the survey, we asked respondents for feedback about their experience. Incentives, not surprisingly, enhanced it (see Figure 8): respondents who were offered incentives expressed significantly higher levels of satisfaction. Previous research has shown that higher satisfaction also leads to greater repeat survey participation (see Gray and MacElroy, 2003). It can therefore be speculated that these higher satisfaction levels help explain the elevated product-interest ratings observed in the survey.

Further research needed

The goal of this study was to examine the effect of survey design and incentives on completion rate, data quality and respondent satisfaction with the survey experience. The results indicate that survey design aesthetics influence neither data quality nor completion rates. Minor changes to an online survey’s look and feel, such as the use of color, should not be a market researcher’s primary concern. Further research is needed, however, to better evaluate the impact of more intensive survey designs and formats, such as animation, multimedia or Java applets.

On the other hand, offering any incentive (as opposed to none at all) increases the overall completion rate: respondents who were offered an incentive completed the survey at a higher rate than those who were not. The type of incentive, be it a smaller guaranteed cash payment or entry into a cash prize drawing, does not matter. However, the impact that incentives have on completion rate occurs in the first minute-and-a-half of the survey, so it is crucial to engage respondents during that window; dropout rates there are twice as high as later in the survey, whether or not an incentive is offered. Decipher is currently conducting additional research to pinpoint exactly which aspects of survey writing keep respondents engaged past the all-important two-minute mark.

Finally, incentives must be applied with careful consideration, because they can introduce bias into the data. Respondents who are offered an incentive show elevated satisfaction levels compared to those who are not, and increased interest may also appear in product concepts or ratings, which skew toward the positive end of the spectrum. Whether this is good or bad is hard to say; there may be an equally powerful negative bias among those who do not receive an incentive. Incentives may encourage respondents to be more thoughtful about filling out surveys, yet they may also attract professional survey takers who simply do what it takes to receive the incentive, paying little regard to the survey itself. Given their impact, we recommend maintaining as much consistency as possible when offering (or not offering) incentives.

References

Gray, M. and MacElroy, B. “IMRO Online Survey Satisfaction Research: A Pilot Study of Salience-based Respondent Experience Modeling.” IMRO’s Journal of Online Research, July 9, 2003. http://www.ijor.org/eval.asp?PID=1

Westergaard, J. “Your survey, our needs.” Quirk’s Marketing Research Review, November 2005.