Making it easier to think green

Editor’s note: Amy J. Hebard is chief research officer at Earthsense, a Syracuse, N.Y., marketing and research company.

What if? What if we changed the look and feel of marketing research questionnaire design, moving away from sterile grids of radio buttons, and instead thought of online surveys as video games, or even as dialogues between trusted friends?

Would candor collapse? Would our sampling frames be skewed? Would our data be destroyed? Of course not. Not when done with care, as our firm found out.

Now in its third wave, Eco-Insights is an online survey conducted by my company, Syracuse, N.Y.-based Earthsense. Our aim is to give clients insight into consumers’ green attitudes and behaviors. To do this, we survey a nationally representative sample of 30,000 U.S. adults online in each wave.

The survey is long (35 minutes on average), but to get a true picture of how people are thinking and acting, we need to cover a lot of territory - values, attitudes, motivations, purchasing and other behavior, demographics, etc.

The green market, after all, is exploding, with increasing numbers of new green products released at a dizzying pace. Total organic sales in the U.S. - both food and non-food products - jumped from $17.7 billion in 2006 to an estimated $25 billion last year. Mass-market grocery stores represent the largest single distribution channel: the most recent Organic Trade Association survey of manufacturers estimates that 38 percent of organic food sales were made in these stores.

Our clients need more product category coverage, not less. Thus we needed to make our design as efficient as possible and the experience as engaging as possible for our participants.

Against this backdrop, we decided in 2009, beginning with the third wave, to conduct the survey more frequently; we now do so twice yearly. The result? Fresh content, timely outputs and (in theory) a shorter instrument.

We recognized the challenge of capturing a broad range of information without frustrating survey participants. But we also knew that Web 2.0+ capabilities increasingly offer tantalizing opportunities to engage respondents and reduce fatigue while respecting their time and ensuring the quality of their input.

Set objectives

With these challenges and opportunities front-and-center, we set some ambitious objectives for ourselves:

•  tailor the survey to show respect for respondents (engage them and make the experience interactive - a dialogue; respect their time);

•  solicit respondents’ feedback about their experience;

•  achieve lower survey dropout levels, compared to industry standards;

•  show appreciation by sharing with them some of what we learn;

•  and last, but not least: have a little fun in the process!

These objectives are not always standard in the industry. We needed “partners in crime” willing to work with us to make this happen, so we solicited competitive bids to select both our sample provider and programming support. We chose to partner with Shelton, Conn.-based Survey Sampling International (SSI) for use of its Survey Spot panel, seeing the firm’s Respondent Preservation Initiative as right up our alley. For programming support we turned to Boston-based Bernett Research.

Lot of survey real estate

Eco-Insights covers a lot of survey real estate:

•  140 product categories (does the consumer buy green or conventional and why?);

•  channels consumers shop in, including major mass-market grocery chains, mass merchandisers, and club, discount and drug stores;

•  attitudes and behaviors toward health and the environment;

•  changes in green-buying behavior;

•  a range of demographic and other profiling questions;

•  geo-coding to understand geographic drivers of behavior.

We close the survey with two questions: a five-point rating of the survey experience (“To what extent did you enjoy taking this survey?”), followed by an open-ended question (“Please tell me why you gave this rating.”). The results were very informative, enabling us to draw some conclusions about what we need to do more - and less - of this fall when we carry out our fourth wave.

The respondent experience

We measured the respondent experience in three ways: How did they describe it? How did they rate it? And (to examine engagement) was their dropout rate lower than average?

Descriptions: No surprise - “long” was the most popular adjective respondents used when explaining how they rated their experience. (Sigh.) But the other terms that rose to the top gave us hope, starting with “like” and “liked.” Nearly every frequently used descriptor was positive - words we were pleased to see applied to Eco-Insights.

Experience ratings: Few found the experience of taking the survey negative: indeed, the vast majority - 90 percent - gave it a 3, 4 or 5 rating (where 1 = did not enjoy at all and 5 = enjoyed it a great deal). With no norms yet to compare this result against, we are using the data in two ways. First, this rating is the benchmark against which we will measure improvement in subsequent waves. In addition, we are open to comparing results with fellow researchers running other large, complex studies. (Interested? Let us know!)

Respondent engagement: An important measure of engagement is completion. Our expectation was that respondents would be willing to hang with us if we showed respect for their time and contributions. We were pleased to find that, compared to SSI’s industry benchmark for studies of comparable length, our dropout rate was 14 percent lower.
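A note for readers doing the math: “14 percent lower” can be read as a relative difference rather than a drop of 14 percentage points. A toy calculation (the benchmark rate below is invented purely for illustration) shows the relative reading:

```typescript
// Toy arithmetic only: the benchmark dropout rate is invented.
const benchmarkDropout = 0.30;           // hypothetical industry benchmark
const relativeReduction = 0.14;          // "14 percent lower," read relatively
const observedDropout = benchmarkDropout * (1 - relativeReduction);
console.log(observedDropout.toFixed(3)); // 0.258, i.e., 25.8 percent dropout
```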

A few key things

We credit innovations in the respondent experience with these results and want to share with others a few of the key things we did to achieve them:

Interactive: The principle of making the experience interactive guided our design in a number of ways, and many participants noticed.

“Love the interactive format! I wish more surveys were done this way. Thank you.”

“I really liked the interactiveness . . . It was probably the most fun and interactive survey I have ever taken.”

Personalization: From the opening screen to the farewell, respondents realized this was not their father’s online survey. We introduced Veronica, the Eco-Insights survey avatar and our respondents’ personal Sherpa; her first question asked for their name. Since the plan was for her to interact with them throughout the instrument, “Hey you” just wouldn’t do. Their interactions would be personalized but still anonymous, of course.

Respondents said the use of survey avatar Veronica helped personalize the usually impersonal act of taking a survey.

Veronica appeared 15 to 20 times, depending on skip patterns, to provide information (e.g., how to use a response scale or to introduce a verbatim question), give encouragement and status updates (“Not much longer now!”) or ask questions directly (one way such interjections can be scheduled is sketched after the quotes below). She was frequently mentioned when we asked for feedback about the experience, and reactions were overwhelmingly positive, including more than one request for her “phone number”!

“I liked the approach of this survey, as if I was speaking with someone.”

“I like it when people call me by my name. Not like others that just say ‘you.’ I liked taking this survey - it felt like someone was there talking to me because of the cartoon lady.”

“I actually like the use of ‘Veronica’ and the use of her ‘words of encouragement’ and ‘updates’ on my survey progress.”

“It didn’t feel as long as it was due to the fun little lady calling out my name. :)”
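As promised above, here is a minimal sketch of how avatar interjections like Veronica’s might be scheduled against survey progress. This is not Earthsense’s actual Confirmit code; every name and threshold below is a hypothetical illustration.

```typescript
// Hypothetical sketch: scheduling avatar interjections by survey state.
interface SurveyState {
  respondentName: string; // captured on the opening screen
  questionIndex: number;  // current question, 1-based
  totalQuestions: number; // total after skip logic is applied
}

interface Interjection {
  trigger: (s: SurveyState) => boolean; // when the avatar should appear
  message: (s: SurveyState) => string;  // what she says
}

const interjections: Interjection[] = [
  {
    // Greet the respondent by name right after it has been captured.
    trigger: (s) => s.questionIndex === 2,
    message: (s) => `Nice to meet you, ${s.respondentName}!`,
  },
  {
    // Encouragement and a status update near the three-quarter mark.
    trigger: (s) => s.questionIndex === Math.floor(s.totalQuestions * 0.75),
    message: () => "Not much longer now!",
  },
];

// Called as each question renders; returns any messages that are due.
function avatarSays(state: SurveyState): string[] {
  return interjections.filter((i) => i.trigger(state)).map((i) => i.message(state));
}

// Example: question 75 of 100 triggers the encouragement message.
console.log(avatarSays({ respondentName: "Pat", questionIndex: 75, totalQuestions: 100 }));
```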

Visualization: We tried to think more like designers of video games than of surveys. Exploiting new capabilities enabled by the Web, we used Confirmit (version 14), challenging our programmers to new heights of creativity. In the green world you hear talk of being “off the (electrical) grid.” In our survey, that translated into becoming anti-grid (survey grid, that is) as much as possible. We used reveals, masking and picture drag-and-drops in place of text to represent the products we were testing:

“I really like the graphics. If this had been a survey with only text I would not have even bothered finishing it, because of mental/visual fatigue.”

“I felt that the dragging and interactive aspect of it was much more engaging and better managed than merely clicking dot after dot to answer the same information.”

There are a few caveats. Using pictures added considerable development time, and it’s critical to program them for pre-loading; otherwise you’ll try people’s patience. And, a word to the wise: use them sparingly. We covered 139 other products. Need we say that was too many?
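For those programming their own surveys, pre-loading can be as simple as fetching the images for upcoming screens while the respondent answers earlier questions. A minimal browser-side sketch (the URLs are placeholders, not Eco-Insights assets):

```typescript
// Hypothetical sketch: warm the browser cache before image questions appear.
function preloadImages(urls: string[]): Promise<void[]> {
  return Promise.all(
    urls.map(
      (url) =>
        new Promise<void>((resolve, reject) => {
          const img = new Image();
          img.onload = () => resolve();                            // cached and ready
          img.onerror = () => reject(new Error(`Failed: ${url}`)); // surface bad assets early
          img.src = url;                                           // triggers the download
        })
    )
  );
}

// Start the downloads early so product screens render without a wait.
preloadImages(["/img/product-a.png", "/img/product-b.png"]).catch(console.error);
```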

Response format variations: People get bored quickly with repetition. For that reason, we incorporated several different response formats in addition to drag-and-drops. Sliders, in particular, were well received:

“All the dragging and dropping was cool, and the sliders make answering more fun and easy. Way less boring than most surveys!”

“The option to use a slider or enter a number helped speed up some questions for me.”

“Although it was a long survey that I normally would have closed out of by now, you made it entertaining enough to keep me.”

A handful of people mentioned difficulty dragging the sliders; one claimed her responses weren’t accurate and said we should consider eliminating them. (We did consider it.) Having thoroughly tested the sliders and other response formats across a variety of download speeds and screen configurations, we knew this rarely occurred. The message to all, though, is this: test, test, test.
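One quoted respondent appreciated having “a slider or enter a number.” Here is a minimal sketch of pairing the two controls, so those who find dragging difficult can type instead; the element IDs are hypothetical, not from our actual instrument:

```typescript
// Hypothetical sketch: keep a slider and a numeric box in sync so
// respondents can answer with whichever control they prefer.
function bindSliderToNumberBox(sliderId: string, boxId: string): void {
  const slider = document.getElementById(sliderId) as HTMLInputElement;
  const box = document.getElementById(boxId) as HTMLInputElement;

  // Dragging the slider updates the number box...
  slider.addEventListener("input", () => {
    box.value = slider.value;
  });

  // ...and typing a number moves the slider.
  box.addEventListener("input", () => {
    slider.value = box.value;
  });
}

bindSliderToNumberBox("purchase-frequency-slider", "purchase-frequency-box");
```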

Survey length: This was our bugaboo. Despite a focused effort to eliminate anything unnecessary, the green space is so big that much needs to be covered to represent this market effectively. Of all the adjectives respondents used to describe the experience, “long” was the first mentioned by many.

“The survey was tooooooooooooo long.”

“I didn’t like the length - [it was] too long, [but I] did like the animation and encouragement to continue.”

“It was more fun than most, but toooooo looooonnng.”

Length considerations have been important to date, and will be again with our next wave this fall.

Foundation for refinements

Ratings, survey feedback and “other/specify” verbatims are providing the foundation for refinements to the survey. In addition, sharing what we learn is an important contribution we have made to our partnerships with both SSI and Bernett, as this can only help all of us in our common goal of improving the respondent experience. We also shared some of our learnings with Survey Spot panelists. With the economy pressing on everyone’s mind, we thought the accompanying chart, for example, would be of interest. We found that, contrary to popular belief, many people claim to be consistent in their green buying - at whatever level it was before. The challenge, of course, is to understand those whose behaviors are changing and how - but that’s another article for another day!

Ultimate survey sin

Of the more than 30,000 respondents who took our spring 2009 survey, one took us to task for, as she put it, “commit[ting] the ultimate survey sin of anthropomorphizing. You are NOT a person, g-dd-mn it.” She is entitled to her opinion, but we respectfully disagree that this is the ultimate (or even necessarily an important) survey sin. The truly egregious survey sin is the tedium and boredom - and the disrespect for respondents’ time - caused by the many poorly conceived surveys, online or otherwise, that our profession has fielded in the past.

Rather than anthropomorphizing Veronica, our intent was to show a human face to our survey respondents, and interact with them - to the extent we can virtually - as fellow specimens of the same human species. Panelists (or respondents, or survey takers, call them what you will) are not ciphers. They are people. Veronica is part and parcel of our goal of treating them as such.

So, we close with some of the guidelines we use, and will continue to use, in coming waves of the survey, in hopes others pursuing a similar path will find them valuable:

Aim for engaging. Take the time you need to do the survey right. Program it with respondent preservation in mind, acknowledging that, yes, a more engaging process does require crafting (no cookie cutters) and may take extra time.

Personalize. Keep the consumer engaged through personalized touches, while maintaining appropriate distance and objectivity.

Respect their time. Encourage respondents as they make progress; be honest at the outset about the survey length, especially if it’s long. Break the instrument into separate studies if possible.

Be interactive. Create an interactive environment that feels like a dialogue by using an avatar to interact with the respondent.

Get feedback. Strive to decrease dropout levels by asking for candid feedback about the survey experience at the end, and use it when designing your next study.

Reciprocate. Share some key learnings with respondents, so they feel appreciated and realize that you value their time.

Even refreshing

In closing, for the Eco-Insights team, the feedback we received was often insightful, specific and even refreshing: 365 people actually used the closing verbatim to say thanks for the experience!

Veronica thanked them, too, as she bid them farewell: “Thank you so much!!! Your opinions will help shape the way companies think about doing business in the future.”

She’d like that to be true for the market research industry, too!

Test the effects

A great deal of research has demonstrated that long questionnaires yield poor data quality. This occurs in at least two ways. First, longer questionnaires yield higher dropout rates than shorter ones, which causes non-response bias. Second, satisficing - a response behavior whereby people don’t expend the effort required to give careful, thoughtful, truthful answers - causes a drop in data quality. Moreover, fatigue induced by questionnaire length increases satisficing.

One way for researchers to test whether their questionnaires are too long is to field a one-question survey using the last (substantive) question of a previous study. This mimics an experimental design whereby length is manipulated by flipping the order of questions. Any differences in responses can be attributed to either question order effects or length.
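One way to operationalize that comparison (not a procedure Garland prescribes in detail here) is a simple chi-square test on the two response distributions. The counts below are invented purely for illustration; a statistic that is large relative to the critical value at k-1 degrees of freedom would flag an order or length effect:

```typescript
// Hypothetical sketch: compare responses to the same question asked at the
// end of a long survey vs. fielded as a standalone. Counts are invented.
function chiSquare(a: number[], b: number[]): number {
  const totalA = a.reduce((x, y) => x + y, 0);
  const totalB = b.reduce((x, y) => x + y, 0);
  const grand = totalA + totalB;
  let stat = 0;
  for (let i = 0; i < a.length; i++) {
    const row = a[i] + b[i];
    const expA = (row * totalA) / grand; // expected count under independence
    const expB = (row * totalB) / grand;
    stat += (a[i] - expA) ** 2 / expA + (b[i] - expB) ** 2 / expB;
  }
  return stat; // compare to the chi-square critical value, df = k - 1
}

const endOfLongSurvey = [40, 60, 120, 150, 130]; // 1-5 ratings, long survey
const fieldedAlone = [25, 50, 110, 170, 145];    // same question, standalone
console.log(chiSquare(endOfLongSurvey, fieldedAlone).toFixed(2));
```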

If a questionnaire is deemed too long, and shortening it or splitting it across recontacts is not a viable option, Earthsense’s research suggests that creating an engaging experience can reduce signs of satisficing. However, some research shows that while new methods can be engaging, interactive and a “nice change of pace,” these techniques can also be confusing, difficult and distracting. So again, researchers ought to test the effects of efforts to make questionnaires more engaging before fully implementing a new method.

- Philip Garland, senior methodologist, Survey Sampling International