Editor’s note: Bill MacElroy is president of Modalis Research Technologies, Inc., San Francisco.

The popularity of conducting research online has prompted many questions regarding the impact of various conditions under which surveys are conducted. In particular, the interaction between length of survey (both in terms of time and number of questions) and incentive (either total incentive offered as a prize package or the approximate value of the incentive on an individual basis) has been thought to influence the number and proportion of mid-survey abandoners. This article will discuss the findings from 19 Web-based studies conducted from January 1 to April 25.

In order to remove the bias that might be caused by different populations and survey topics, all of the studies used for this analysis were conducted with the same general target audience and all involved business-to-business technology-related decisions. The total number of respondents included in these surveys was 21,867, and the median sample size was 473.

The primary focus of this discussion will be to determine the degree to which various factors influence the rate at which people drop out of a survey once they have begun the process. These dropouts, also referred to as "mid-terminates," tend to indicate the point at which respondent fatigue, boredom, or lack of perceived value becomes critical. As a rule of thumb, when surveys have a mid-terminate rate of more than 25 percent, a post hoc evaluation of the factors leading to the problem is probably a good idea.

Variables

Specifically, we have chosen four variables suspected of driving excessive dropout rates: total incentives offered, total average survey time, the known value of the individual incentive, and the number of screens.

As a first step, a simple linear regression was conducted to determine the drivers of mid-terminate behavior. The results showed fair predictive capability, with a multiple R-square of .524. The degree to which each of the four variables explains dropouts (as measured by standardized beta coefficients) is shown in the chart.
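The article does not include the underlying data or name the software used, but a minimal sketch of this kind of calculation in Python might look like the following. The DataFrame `studies`, its column names, and every value in it are hypothetical, standing in for the 19 studies' per-survey statistics.

```python
# Sketch only: standardized betas and multiple R-square for the four
# candidate drivers of mid-terminate rates. All data below are invented.
import pandas as pd
import statsmodels.api as sm

studies = pd.DataFrame({
    "total_incentive": [500, 1000, 1500, 2000, 750, 1250],   # total prize pool ($)
    "avg_time_min":    [12, 18, 25, 15, 22, 10],              # mean completion time
    "known_incentive": [5, 10, 20, 25, 15, 5],                # per-respondent value ($)
    "num_screens":     [20, 28, 40, 25, 35, 18],              # screens/questions
    "midterm_rate":    [0.18, 0.22, 0.38, 0.15, 0.33, 0.12],  # share abandoning mid-survey
})

predictors = ["total_incentive", "avg_time_min", "known_incentive", "num_screens"]

# Standardizing both sides (z-scores) makes the OLS coefficients standardized betas.
z = (studies - studies.mean()) / studies.std()
X = sm.add_constant(z[predictors])
model = sm.OLS(z["midterm_rate"], X).fit()

print("Multiple R-square:", round(model.rsquared, 3))
print("Standardized betas:")
print(model.params.drop("const").round(3))
```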

Chart 1

Only two of the variables, "Total Incentives" and "Average Time," were significant at the 90 percent confidence level. This finding was a bit of a surprise: I had expected all of these variables to be significantly influential in predicting dropout rates, and have previously observed instances in which they were.

The explanation for why the linear regression was not a very good model was quite simple: the relationships are not linear. In fact, each of the variables is better modeled using curve-fitting software.
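The article does not name the curve-fitting software or the functional forms it selected. Purely as a sketch, a nonlinear fit of this kind could be done as follows, assuming an exponential-decay relationship between total prize value and dropout rate and using invented data points.

```python
# Sketch only: fitting an assumed decaying-exponential curve to (invented)
# dropout-rate data. Neither the model form nor the numbers come from the article.
import numpy as np
from scipy.optimize import curve_fit

def dropout_curve(x, floor, scale, rate):
    """Dropout rate falls toward a floor as the total incentive x grows."""
    return floor + scale * np.exp(-rate * x)

prize_totals = np.array([250, 500, 750, 1000, 1500, 2000, 3000], dtype=float)
dropout_rates = np.array([0.45, 0.38, 0.33, 0.29, 0.25, 0.23, 0.22])

params, _ = curve_fit(dropout_curve, prize_totals, dropout_rates,
                      p0=[0.2, 0.4, 0.001])  # rough starting guesses
floor, scale, rate = params

# Predicted dropout at a $1,000 prize package under this fitted curve.
print(round(dropout_curve(1000.0, floor, scale, rate), 3))
```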

Graph 1

Total incentives
The curve that best fits the relationship between the total prize value offered as a drawing-type incentive and the level of dropout shows that a certain threshold of total prize money must be present to avoid critical mid-termination rates (see graph above). This model indicates that, for the B2B surveys studied, a prize package of just over $1,000 was needed for more than 70 percent of this audience to complete the survey.

Table 1 shows the predicted mid-terminate rates based on the total prize incentive offered.
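Table 1 itself is not reproduced here. Purely as an illustration, a lookup of this kind could be generated from a fitted curve like the one sketched above; the model form and coefficients below are invented, chosen only so the output roughly matches the $1,000 / 70-percent threshold described in the text.

```python
# Illustration only: tabulating predicted completion rates from an assumed
# fitted dropout curve. Coefficients are invented, not the article's model.
import numpy as np

def predicted_dropout(prize_total, floor=0.20, scale=0.40, rate=0.0015):
    """Assumed exponential-decay model of mid-terminate rate vs. total prize value."""
    return floor + scale * np.exp(-rate * prize_total)

for prize in (250, 500, 750, 1000, 1500, 2000):
    completion = 1.0 - predicted_dropout(prize)
    print(f"${prize:>5}: predicted completion {completion:.0%}")
```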

Table 1 and Table 2

Total average time of survey

The next variable, total average time of the survey, was also highly predictive of the mid-terminate rates. Surveys that took more than 17.5 minutes led to predicted completion rates of less than 70 percent (see graph below).

Graph 2

Table 2 shows the predicted dropout rates for surveys of varying lengths.

Known value of the incentive

The cash-equivalent amount that each respondent would receive had an interesting relationship to the proportion of predicted dropouts. A value of only $5 would still leave a predicted 78 percent completion rate. Once the value hit $22, the curve flattened noticeably, indicating that radically increasing the individual incentive rate above a certain level does little to influence the proportion of those who abandon the survey (see below).

Graph 3

Table 3 shows this relationship between known rewards and predicted dropout rates.

Number of screens

The final variable, number of screens (in these cases very close to the total number of questions), showed a logical relationship to dropout rates: the more screens/questions, the higher the mid-terminate rate. In the projects included in this analysis, surveys that exceeded 30 screens/questions were predicted to exceed the maximum acceptable level of dropouts, which we set at 30 percent.

Table 3

Table 4

We have also done usability studies on online survey design showing that the content of questions has an impact on perceived ease of use. Trying to cut the number of questions by creating long explanations and/or complex structures is often more annoying to respondents than splitting the issues into several smaller question sets (see graph below).

Graph 4

Table 4 shows that dropout rates climb sharply once surveys exceed 30 questions. Note: Although increasing incentives for longer surveys can control some dropout, our studies show that even when large incentives are offered to potential respondents in this difficult-to-reach population, they do little to stem the abandonment associated with long surveys.

Conclusions

The anecdotal suppositions that length of survey and amount of incentive interact to influence dropout from an online survey appear to be confirmed by the studies we have examined. While more work would no doubt produce more sophisticated models, we might begin with the assumption that B2B surveys that consist of fewer than 30 questions/screens, last no longer than 17 to 18 minutes, and offer either a total drawing package worth at least $1,000 or a known individual incentive of approximately $20 will probably yield the best results.
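For convenience, those suggested thresholds can be collected into a simple checklist. The cutoffs below are the ones stated above; the function itself is purely illustrative.

```python
# Illustrative checklist of the rule-of-thumb thresholds suggested in the text.
def meets_guidelines(num_screens, avg_minutes, drawing_total=0.0, known_incentive=0.0):
    """Return True if a planned B2B survey falls inside the suggested limits."""
    short_enough = num_screens < 30 and avg_minutes <= 18   # fewer than 30 screens, ~17-18 min
    incentive_ok = drawing_total >= 1000 or known_incentive >= 20
    return short_enough and incentive_ok

print(meets_guidelines(num_screens=25, avg_minutes=15, drawing_total=1200))   # True
print(meets_guidelines(num_screens=35, avg_minutes=20, known_incentive=25))   # False
```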

Based on my discussions with companies that focus on consumer-related studies, it is my opinion that similar results would apply in the B2C market as well.