The survey killer

Editor’s note: Deborah Sleep is director at London-based Engage Research. Jon Puleston is vice president of research firm Global Market Insite Inc.’s interactive group in London.

Online research has opened up a whole new world in the past few years, but is it really living up to its promise? Are research professionals using online survey techniques as effectively as they could? Whose responsibility is it to ensure an excellent respondent experience, one that fosters long-term engagement, thoughtful responses and reliable research data?

Our respective firms recently set out on a quest to get clarity on these questions on behalf of all researchers worldwide. This article, the first in a series, goes to the roots of the issues and highlights the consequences of respondents losing interest during online surveys. Next month’s follow-up article will reveal how advances in technology offering novel question and response mechanics can help market researchers overcome this recurring challenge while making online surveys more interesting and easier for respondents to take.

Quite different

Depending on one’s role and position in the market research continuum, the issues, and therefore the solutions associated with them, look quite different:

Researchers depend on survey respondents being willing and able to answer questions, and on being able to interpret those answers in a meaningful way for their clients.

Clients can now afford to target large and niche samples around the world with online research, get answers to time-sensitive questions very quickly, and stretch their research budgets that much further. These benefits become less compelling if we start to question the quality of the research data, or if the cost of maintaining a panel increases.

Panel providers put a lot of effort and expense into recruiting panelists and turning them into a valuable business asset. A panelist who is bored during a survey will be less likely to take another one in the future, so happy respondents mean a more cost-effective business, and more creative survey design means winning more business.

Respondents want to participate in surveys that are interesting and engaging. While they appreciate the rewards they receive for completing surveys, they actually like giving their opinions - but want to be able to do this effectively.

Emotional connection

The fundamental difference between conducting research online and offline is the presence of a human interviewer. An interviewer can establish an emotional connection with respondents, and getting them into the right frame of mind to answer questions is much harder with online surveys. With no interviewer in sight, the online respondent is more likely to drop out of a survey sooner. And without being proactively probed by an interviewer, they can progress through a survey with a minimum of thought. When participating in surveys online, it is easier for respondents to: say “I don’t know”; give minimal consideration to their answers; not pay attention when reading a question; skip past instructions; not read blocks of text.

Research reveals that most respondents complain that completing some online surveys is like wading through mud. They get very frustrated waiting for questions to load and constantly having to click the [next] button. If each question takes three seconds to load, on average, then in a typical 60-question survey respondents spend three minutes just waiting for questions to load. Combined with accidentally missed answers and the error messages that follow, this adds unnecessary frustration. In general, respondents also dislike drop-down menu options and questions that are complicated to answer.

Need a makeover

Online surveys need a makeover. Many of them still look like static online forms, putting too much focus on the questions themselves and not enough on the techniques used to deliver and communicate them to respondents. This lack of fluidity feeds directly into a lack of fun and engagement.

The market research language of surveys is also a factor in boring respondents. Reading statements such as, “Please indicate how much you agree or disagree with the statements below on a scale of 1 to 10, where 1 means you completely disagree and 10 means you completely agree, and 5 means you neither agree nor disagree” takes six seconds from start to finish. But if the question is laid out properly, this verbiage is totally unnecessary and could simply be reduced to “Please rate the following statements.”

Respondents appreciate the opportunity to provide their opinions when asked to evaluate new and different ideas, a much bigger - and sometimes still underestimated - driver than the rewards they receive for doing it. But the real question is: are we as research professionals encouraging and enabling respondents to do this as best we can to get the best results possible? To answer this question, we first need to examine what causes respondent boredom.

Memories dry up

When boredom sets in, online respondents usually still complete the survey, but they stop giving such considered answers. Their memories dry up and, if they don’t enjoy the experience, they are far less likely to participate in surveys sent to them in the future. So how many really do drop out? And if they don’t go so far as to drop out, how else does their response behavior change? Our research revealed some interesting findings around these questions.

Survey dropout rates

We examined the dropout rates from over 550 interactive surveys conducted on the GMI interactive Flash survey platform and correlated them with the survey length and the question formats used. All the surveys included a progress bar showing how much of the survey had been completed.
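
Purely as an illustration of the kind of analysis involved (not the actual GMI tooling), a rough Python sketch might look like the following, assuming a hypothetical per-survey data file with made-up columns survey_id, num_questions, dominant_format, starts and completes:

    import pandas as pd

    # Hypothetical per-survey export: one row per survey with its number of
    # questions, its dominant question format and start/complete counts.
    surveys = pd.read_csv("survey_stats.csv")

    # Dropout rate = share of respondents who started but did not finish.
    surveys["dropout_rate"] = 1 - surveys["completes"] / surveys["starts"]

    # Is there a relationship between survey length and dropout?
    print("Length vs. dropout correlation:",
          round(surveys["num_questions"].corr(surveys["dropout_rate"]), 2))

    # Average dropout rate by dominant question format (grid, multi-choice, etc.).
    print(surveys.groupby("dominant_format")["dropout_rate"]
                 .mean()
                 .sort_values(ascending=False))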

The average completion rate for surveys under 60 questions (typically 20 minutes) was 82 percent. So while it is reassuring to know that the vast majority of respondents do complete the interactive surveys they start, we wanted to understand when dropouts occurred and what was driving them.

  • Dropouts tend to occur early, within the first 15 questions. Once past this threshold, a respondent is more likely to complete the survey, regardless of its length.
  • For surveys containing between 10 and 60 questions, there is no apparent relationship between survey length and dropout rate (incentives typically increase with the questionnaire length).
  • In surveys longer than 60 questions, dropout levels appear to increase and are certainly less consistent, as shown in Figure 1.
  • Completion time and dropout rate showed no apparent relationship (this may be due to the varying nature of the questions).
  • Survey respondents report that relevance of subject matter and interest in the questions are the main factors influencing whether they complete a survey.

The number of options

The key factor triggering dropout appears to be the number of options presented in a question. Figure 2 shows that dropout significantly increases when questions show more than 20 options on a page.

Question types

The question format also has an influence on respondent boredom. Grid formats and matrix questions appear particularly prone to dropout (Figure 3):

  • Nearly 60 percent of grid questions triggered some level of dropout, compared to 30 percent for standard single-choice questions;
  • Check-box (multi-choice) questions were the second most likely to cause dropout.

Question repetition

More worryingly for panel providers, an analysis of a dozen surveys containing repetitive sets of three or more similar questions showed that such sets can trigger dropout of up to 5 percent of respondents, depending on where they are placed in a survey. Repetition is a killer, particularly for matrix questions, which are the most popular question format in most surveys.

Speeding effects

A simple survey with a range of different questions was used to test the increase in answering speed caused by the onset of boredom. The positions of these questions were swapped across different cells of respondents, so that some saw a given question at the beginning of the survey and others saw it at the end, and the time each respondent spent answering each question was recorded. The test survey was approximately 16 minutes long, sampling 789 respondents split over seven cells, with a minimum of 100 respondents per question per position.
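
To make the comparison concrete, here is a minimal sketch of the start-versus-end timing calculation (hypothetical file and column names, not the actual test code):

    import pandas as pd

    # Hypothetical response-level timings: respondent_id, question_id,
    # position ("start" or "end") and seconds spent answering.
    times = pd.read_csv("question_timings.csv")

    # Average answering time for each question in each position.
    avg = (times.groupby(["question_id", "position"])["seconds_to_answer"]
                .mean()
                .unstack("position"))

    # Percentage change in answering time when a question appears at the end.
    avg["pct_change"] = (avg["end"] - avg["start"]) / avg["start"] * 100
    print(avg.sort_values("pct_change"))

Comparing questions on a like-for-like basis in this way, the test showed the following: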

  • For the same question, the average time spent answering decreased by 17 percent when it was at the end of the survey, compared to when it was at the start.
  • Within a repetitive question set, respondents sped up noticeably from the first question to the follow-on questions, spending an average of 22 percent less time answering a follow-on question.
  • Particularly prone to speeding effects near the end of a survey are instructions and open-ended questions. Reading times for instructions dropped by an average of 22 percent (halving in some cases), and respondents spent 25 percent less time answering open-ends.

Changing characteristics of response data

The time respondents took to answer questions was shown to affect the character of their responses. Five key changes were observed:

  • fewer check marks in multiple-response questions;
  • a shift toward the neutral and slightly negative when answering range questions;
  • generally a reduction in extreme responses;
  • more pattern-answering within long question sets;
  • shorter text responses in open-ended questions.

The test revealed an average 10 percent drop in the number of check-box selections in multi-response questions (Figure 4). In typical five-point-range questions, neutral/don’t know responses rose by 18 percent and there was a 7 percent shift from slightly positive to slightly negative. Very positive and very negative responses both fell significantly, by an average of 25 percent, though these differences tend to cancel each other out.
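
As a rough illustration of how such shifts might be tabulated (again with a hypothetical data layout, not the actual analysis code), the answer distribution of a five-point-scale question can be compared between start and end cells:

    import pandas as pd

    # Hypothetical scale responses: respondent_id, position ("start"/"end")
    # and answer (1-5, or "dk" for don't know).
    responses = pd.read_csv("scale_responses.csv")

    # Percentage share of each answer category within each position cell,
    # which makes shifts toward neutral/"dk" and away from the extremes visible.
    dist = (responses.groupby("position")["answer"]
                     .value_counts(normalize=True)
                     .unstack(fill_value=0) * 100)
    print(dist.round(1))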

Incidences of pattern-answering were also examined, looking at cases where respondents answered more than five questions in a row in the same way. Pattern-answering increased by an average of 38 percent when the same questions were asked at the end of a survey rather than at the start. Open-ends appear to be the question format most sensitive to boredom: the two-order test showed an average 41 percent decline in word count when the same question was asked at the end of the test survey rather than at the start.
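
The pattern-answering check itself is simple. A minimal sketch of the rule described above - flagging a respondent who gives the same answer to more than five questions in a row - might look like this:

    def is_pattern_answering(answers, run_threshold=5):
        """Return True if the same answer appears more than run_threshold
        times in a row in this respondent's ordered answer sequence."""
        run = 1
        for prev, curr in zip(answers, answers[1:]):
            run = run + 1 if curr == prev else 1
            if run > run_threshold:
                return True
        return False

    print(is_pattern_answering([3, 3, 3, 3, 3, 3, 4]))  # True: six 3s in a row
    print(is_pattern_answering([1, 2, 3, 2, 1, 4, 5]))  # False: no long run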

Clear opportunities

It is important to remember that most respondents did complete the surveys they started and that the behavioral patterns we have been examining are marginal ones that occur among the least-engaged respondents. Even so, there are clear opportunities to improve the way we build online research in order to engage better with all respondents.

The exciting challenge in front of us is to completely rethink conventional question approaches. More important online surveys need a much greater investment in design and development, and the profession needs a shift in thinking. We must recognize the importance of design and communication in the online survey process and change our mindset about how we communicate with respondents: focusing not just on the information we want to find out but also on the need to engage respondents in the experience, starting with the language used in surveys.

Online respondent engagement is not out of reach - we just need to be more creative about it. In next month’s issue, we will focus on the solutions available to generate greater respondent engagement and show how interactivity can be an easily implemented cure for the common online survey. We will reveal the findings of the next phase of this research project, which compare the results gathered from regular online surveys with those from surveys using interactive Flash elements.