The survey 'burden factor'

Editor’s note: Jennifer Drolet is vice president of client services and moderating services at Denver-based iModerate Research Technologies. Alice Butler is vice president at M/A/R/C Research, Irving, Texas. Steve Davis is executive vice president and COO at e-Rewards Inc., a Dallas research firm.

Over the years, many published research projects have explored survey length: what the “ideal” length of a survey is and how length impacts respondents, their behavior and data quality. Some projects have explored the diminishing response and panelist-retention rates that can accompany longer surveys; others have explored the risk of straightlining due to survey length.

It can be challenging to accomplish all of a project’s research goals within a limited survey length, so researchers have explored a number of ways to overcome this issue - through survey design tips, breaking one long survey into two shorter surveys, and so on. Online research brings additional challenges: it’s critical for researchers to provide engaging experiences while fielding survey instruments that capture high-quality data.

Perception is described as the process of attaining awareness or understanding of sensory information. The difficulty in attaining an accurate perception of reality stems from the fact that humans are unable to take in new information without the inherent bias of their previous knowledge.

Do these same ideas apply in the field of online surveying? Do the types of questions (traditional vs. engaging) in a 30-minute survey have an effect on data quality and survey results, respondent behavior (the survey burden factor) and the respondent experience? Can you change the perception that a survey is burdensome by making elements of the survey design more engaging and interactive for the respondent? We conducted research on research to answer these questions.

Restaurant survey

A 30-minute survey was conducted among 1,132 e-Rewards Market Research panelists, 13 to 64 years old, with age and gender quotas to ensure balanced representation. The survey consisted of a screener, demographics and two question blocks with similar questions - one about quick-service restaurants (QSR) and the other about casual-dining restaurants (CDR). The QSR and CDR blocks were rotated (approximately half of the respondents received QSR questions first and the other half received CDR questions first). Each block contained awareness, usage and attitudinal questions.

Additionally, approximately half of the respondents completed a traditional version of the survey and half completed the engaging version. The survey ended with survey-experience questions, and 107 respondents went on to complete online chats using technology from iModerate. Data collection spanned January 9 through January 13, 2009.

Significant differences

Some significant differences between question formats were discovered. The five question-type comparisons, covering both the engaging and traditional cells, as well as the overall findings, are explored below.

Static boxes (traditional) vs. triggered boxes (engaging)

The first comparison explored static vs. triggered boxes. To test the methods, respondents were asked to recall, “What fast-food restaurants can you think of?” In the static version, respondents were shown 10 boxes in which to provide up to 10 answers. In the triggered version, they were initially given a single box; completing it triggered an additional box, and so on, until they stopped typing or filled 10 boxes.
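
For readers curious about the mechanics, the triggered-box behavior reduces to a small piece of browser-side logic. The sketch below is a minimal illustration under our own assumptions (the container id, field names and plain-DOM TypeScript approach are ours, not the actual survey platform’s code):

```typescript
// Triggered answer boxes: one input is shown at first, and typing into the
// newest box reveals the next, up to a maximum of 10 (per the study design).
const MAX_BOXES = 10;

function addTriggeredBox(container: HTMLElement, index: number): void {
  const input = document.createElement("input");
  input.type = "text";
  input.name = `qsr_recall_${index + 1}`; // hypothetical field name

  let nextRevealed = false;
  input.addEventListener("input", () => {
    // The first keystroke in the newest box triggers one additional box.
    if (!nextRevealed && input.value.trim().length > 0 && index + 1 < MAX_BOXES) {
      nextRevealed = true;
      addTriggeredBox(container, index + 1);
    }
  });

  container.appendChild(input);
}

// Start the respondent with a single visible box.
addTriggeredBox(document.getElementById("recall-boxes")!, 0);
```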

Respondents in the traditional cell (static boxes) showed roughly 10 percent higher unaided brand awareness across most QSR brands than respondents in the engaging cell (triggered boxes). On average, traditional-cell respondents recalled 7.1 QSRs, compared to 5.2 for the engaging cell. When provided with 10 boxes, respondents evidently felt compelled to fill as many as possible.

Respondents ranked ease of use and enjoyment nearly the same on both types of questions (Figure 1).

Check-box grid (traditional) vs. logo card sort (engaging)

The next comparison explored restaurant usage via a check-box grid vs. a logo card sort. In the traditional cell, respondents checked the box in the grid indicating when they had most recently dined at each CDR, whereas in the engaging cell, respondents placed each restaurant’s logo in the box indicating when they had most recently dined there. The percentages of respondents who had visited a CDR within each time frame (past four weeks, one to three months, four to 12 months, and over a year) closely mirrored each other regardless of the survey experience.
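
Mechanically, a card sort of this kind can be sketched with the standard HTML5 drag-and-drop API. The class names, data attributes and answer store below are illustrative assumptions, not the vendors’ implementation:

```typescript
// Logo card sort: each restaurant logo is draggable, and dropping it on a
// recency "bucket" (e.g., past four weeks) records the respondent's answer.
const answers = new Map<string, string>(); // restaurant -> recency bucket

document.querySelectorAll<HTMLElement>(".logo-card").forEach((card) => {
  card.draggable = true;
  card.addEventListener("dragstart", (e: DragEvent) => {
    e.dataTransfer?.setData("text/plain", card.dataset.restaurant ?? "");
  });
});

document.querySelectorAll<HTMLElement>(".recency-bucket").forEach((bucket) => {
  // Preventing the default dragover behavior makes the element a drop target.
  bucket.addEventListener("dragover", (e: DragEvent) => e.preventDefault());
  bucket.addEventListener("drop", (e: DragEvent) => {
    e.preventDefault();
    const restaurant = e.dataTransfer?.getData("text/plain");
    if (!restaurant) return;
    answers.set(restaurant, bucket.dataset.bucket ?? ""); // e.g., "past-4-weeks"
    const card = document.querySelector(`[data-restaurant="${restaurant}"]`);
    if (card) bucket.appendChild(card); // move the logo into the chosen box
  });
});
```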

However, when polled on ease of use and enjoyment of the two question designs, almost 60 percent of the respondents who received the question in the card-sorter format (engaging cell) ranked the format as enjoyable, compared to 33 percent for those who received the traditional grid question. Thirty-five percent felt the traditional method was boring, and 81 percent felt the card sorter (engaging design) was easy to use, compared to 73 percent of the traditional-design respondents (Figure 2).

Respondents also provided some insightful comments supporting the ease of use and enjoyment of the card sorter format (engaging cell):

“I especially liked the questions where you moved the picture logo into the correct box.”

“I enjoyed the way that each section had a different method of input. I get bored selecting boxes all of the time. I especially liked dragging the pictures into the boxes. It felt like I was playing a game of solitaire instead of answering a survey.”

“I really liked being able to drag and drop the answers into the appropriate buckets instead of having to click in the circles - I always miss some! I could change my mind on an answer without a problem, and the survey didn’t yell at me for missing a box.”

Long grids (traditional) vs. short grids (engaging)

The next comparison tested attribute grids. One group of respondents received a single long grid of 21 questions; the other received three shorter grids of seven questions each. Both versions asked respondents to rate a CDR on various characteristics. The combined attribute ratings for the CDR were very similar regardless of the survey experience, although the traditional cell showed a slightly higher top box.

Respondents in both cells ranked ease of use and enjoyment very similarly. However, respondents were far less likely to straightline the attribute grid when it was broken into shorter pieces (Figure 3), and the qualitative feedback clearly supported a preference for shorter grids - and suggested a significant chance that data quality suffers when respondents are presented with cumbersome grids. For example, one respondent said, “I don’t mind the bubbles ... but there were seriously like 30 different questions in a line. I think if they were broken up into groups ... it wouldn’t have felt nearly as overwhelming. It reminds me of a bad Scantron test.” Another respondent said, “Well, it becomes tedious to look at a Web page filled with grids of products and boxes or experiences and boxes. In my opinion, by the time you reach the bottom of the page, you tend to care less about how accurate your response is.”
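
The straightlining measure itself is simple to operationalize. As a hedged sketch (the study’s exact flagging rules aren’t published, so the all-identical-ratings criterion and data shapes here are our assumptions), a grid block can be flagged when every row receives the identical rating:

```typescript
// Straightlining check: flag a grid block in which every row got the same
// rating. Data shapes are illustrative, not the study's actual pipeline.
type GridBlock = number[]; // one rating per grid row

function isStraightlined(block: GridBlock): boolean {
  return block.length > 1 && block.every((rating) => rating === block[0]);
}

// With one 21-row grid split into three 7-row blocks, each block is checked
// separately, so flat answering is both less likely and easier to localize.
function straightlinedBlockCount(blocks: GridBlock[]): number {
  return blocks.filter(isStraightlined).length;
}
```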

Number-entry grid (traditional) vs. logo slider (engaging)

To test comparative ratings, respondents were presented with either a number-entry grid or a logo slider and were asked to rate QSRs on characteristics such as cleanliness, prices, quality of ingredients and more. Number-entry-grid (traditional) respondents simply entered a number on a scale of one to five to indicate each rating. Logo-slider (engaging) respondents moved the logo to place it under their rating. Again, the combined responses were very similar between the two cells, with slightly higher top-two-box ratings in the traditional cell.
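
Under the hood, a logo slider amounts to mapping the logo’s horizontal position on the track to a point on the rating scale. A minimal sketch, where the snap-to-integer behavior and element geometry are our assumptions:

```typescript
// Map the dragged logo's horizontal position on the slider track to the
// nearest point on a 1-to-5 rating scale.
function positionToRating(track: HTMLElement, clientX: number): number {
  const { left, width } = track.getBoundingClientRect();
  // Clamp the pointer to the track, then express it as a 0-1 fraction.
  const fraction = Math.min(Math.max((clientX - left) / width, 0), 1);
  return 1 + Math.round(fraction * 4); // fraction 0.0 -> 1, 1.0 -> 5
}
```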

Nearly 80 percent of respondents using the logo slider felt the experience was either neutral or enjoyable, about 15 percentage points more than entry-grid respondents. And when asked at the end of the survey which parts of the survey were most interesting, respondents cited the logo slider:

“I believe the drag-and-drop inputs and brightly colored brand logos with the companies’ names made it more enjoyable to take this survey.”

“[The most interesting part was] when I was comparing, I believe, KFC-McDonald’s-Subway and I had to drag the logo on a scale from 1 to 5 that answered the question given.”

Additionally, straightlining among the entry-grid respondents was two-and-a-half times that of the logo-slider participants.

Grid (traditional) vs. card sort (engaging)

In the last question-design comparison, respondents were asked to determine the importance of various characteristics in selecting a fast-food restaurant. In the traditional cell, respondents were given a grid of 26 questions and rated each characteristic as “not at all important,” “somewhat important” or “critical.” In the engaging cell, respondents saw a card sort and rated importance by placing the card containing each characteristic in the appropriate box.

The combined importance ratings of the different QSR characteristics were very similar for both survey types. Eighty-three percent of card-sort respondents ranked the question type easy to use, compared to 73 percent for the grid. In terms of enjoyment, over 90 percent of card-sort respondents were neutral about or enjoyed the format, and only 10 percent felt it was boring, versus 32 percent who felt the grid was boring. For this comparison, 7 percent straightlined the grid, but only 2 percent straightlined the card sort (Figure 5).

Interesting or enjoyable

The research showed that 47 percent of respondents who took the engaging survey felt the experience was interesting or enjoyable, compared to 39 percent of those in the traditional cell. Interestingly, more respondents taking the engaging survey felt the survey was extremely long (57 percent, versus 50 percent for the traditional), yet 51 percent of engaging-cell respondents rated the survey better than other online surveys they had taken, compared to only 33 percent of the traditional cell (Figure 6).

It’s important to remember the topics (QSRs and CDRs) and that the survey ran 30 minutes. Additionally, the engaging survey did in fact take longer on average to complete (Figure 7). Even so, the engaging survey experienced only an 8 percent dropoff rate, while the traditional experience had a 14 percent dropoff rate.

Visual interest

Overall, the qualitative discussion for the engaging cell focused heavily on the drag-and-drop and slider elements, which respondents felt were interesting, different and easy to use. And in the traditional cell, several respondents specifically remarked that the survey could be made more interesting if it offered some visual interest or interactivity.

In commenting about the length of the survey, respondents in the traditional cell often thought the survey was long. Their impressions of length seem to be based as much on their sense of tedium as on the actual time spent completing the survey. For those in the engaging cell, despite the diversion of the interactive question format, respondents still often felt that the survey was long (and it did take longer than average to complete). However, because they weren’t bored by it, the length was not particularly bothersome.

Have an impact

The way a survey is designed can have an impact on results and perception of survey length.

Particularly, making a survey design more engaging can: improve the respondent experience; decrease dropoff rates; change the way that some people answer questions; and decrease the number of inattentive respondents.

However, some respondents will become inattentive during long surveys, regardless of the survey experience you provide.

Each stakeholder in the research community has a responsibility to ensure that research is of the highest quality. (We plan to further investigate the survey-result differences we saw in the traditional grids [higher top and top-two boxes] versus the engaging question methods we used.) Panel providers should identify and remove undesirable respondents from their panels. And researchers should carefully evaluate aspects of questionnaire design to see how they may impact the respondent experience, and should ensure that end-clients understand the impact that length and design can have on the experience as well as the data.

Suggested quality guidelines

• When gathering unaided awareness, display all of the answer boxes on the screen at once in order to gather the most responses.

• If the survey is over 15 minutes, place tedious and repetitive tasks early in the questionnaire.

• Insert engaging question types throughout the survey to keep the respondents interested.

• Limit the use of grids, especially long ones that require respondents to scroll.

• Use brand logos for rating questions when possible. It helps respondents connect the rating with the brand.

Acknowledgements

The authors would like to acknowledge the additional contributions to the article from the following companies and individuals. From iModerate: Adam Rossow, vice president of marketing; Christy Tchoumba, senior director of client services; and Joan Rinaldi, senior director of technical services. From e-Rewards Market Research: Randy Medders, art director; Ashley Harlan, director of corporate communications; and Blythe Moore, public relations specialist. From M/A/R/C Research: Sarah Baird, senior data analyst; Kristen Downs, desktop specialist; Karen Shue, director of sampling; Susan Hurry, senior vice president.