Editor's note: Lori Dockery is lead research analyst at Vernon Research Group, a Cedar Rapids, Iowa, research firm.
My inspiration for writing this article came from a story by Dan Coates, MaryLeigh Bliss and Xavier Vivar in the February 2016 issue of Quirk’s (“‘Ain’t nobody got time for that’: The impact of survey duration on completion rates among Millennial respondents”) in which the authors compared median completion time with completion rate in order to find a rule for maximum survey length. At first, I only wanted to replicate their study with our respondents and see if my results would be similar. Once I began reviewing our project data, however, I decided to expand the scope to look at several additional variables. I wanted to be able to predict completion rate from my knowledge of other variables, too.
The task was bigger than I anticipated, largely due to differences in survey software. For example, some products did not capture survey completion time at all, while others captured elapsed time but did not account for respondents starting and stopping or leaving the survey open for an extended time while doing something else.
Also, I decided to use only studies that had been completed while I’d been working at Vernon Research Group, so I could be certain I was assigning some of the variables accurately. This still gave me 104 unique data points over a four-year period. Many studies had unique situations for different participant segments, including different incentives and whether or not the client was masked, so each situation was treated as a separate data point.
First, let’s look at the comparison of median completion time with completion rate so you can see how my results stack up against the findings of Coates, Bliss and Vivar. I used the time to complete each page of the survey, rather than the total elapsed time, to reduce the amount of error caused by respondents pausing or leaving the survey open partway through.
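To make the per-page calculation concrete, here is a minimal sketch in Python of how it might be done. The sample timings, the completion_time helper and the 300-second page cap are illustrative assumptions on my part, not the exact rules or thresholds used in this study.

```python
from statistics import median

# Hypothetical per-page timings (in seconds) for three respondents; in
# practice these would come from the survey platform's page-level timestamps.
respondents = {
    "r001": [12, 45, 30, 22, 18],
    "r002": [15, 38, 900, 25, 20],  # the 900 s page suggests the respondent stepped away
    "r003": [10, 50, 28, 30, 16],
}

# Assumed cutoff: any single page taking longer than this is treated as idle
# time and capped, so one abandoned browser tab does not inflate the total.
PAGE_CAP_SECONDS = 300

def completion_time(page_times, cap=PAGE_CAP_SECONDS):
    """Sum per-page times, capping outlier pages, instead of using raw elapsed time."""
    return sum(min(t, cap) for t in page_times)

times = [completion_time(pages) for pages in respondents.values()]
print(f"Median completion time: {median(times) / 60:.1f} minutes")
```

Capping (or excluding) suspiciously long page times is one simple way to keep a respondent who walked away mid-survey from distorting the median.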