Editor's note: Jon Puleston is vice president in the London office of Seattle research firm Global Market Insite Inc. He can be reached at jpuleston@gmi-mr.com. This article appeared in the February 13, 2012, edition of Quirk's e-newsletter.

Once you understand the various ways in which surveys can be structured to feel more game-like, the next step drills down a bit further to address the actual design of questions and how these can be gamified.

Using more imagery

Most computer games are enormously visual experiences, so if you want a survey to become more gamified, imagery has a vital role to play. The research we have conducted over the last four years has shown us that respondents prefer answering more visually-based questions. Shifting to a visually-based questioning technique can improve respondent enjoyment levels from a score of three out of 10 to eight out of 10.

During one experiment (which I would not encourage anyone to reproduce in real life, as we do not recommend that any survey take longer than 15 minutes), we took a 40-minute survey and created two identical versions – one with imagery and one without. We compared the dropout rate and the propensity to take part in a follow-up survey and took a granular look at the data. Thirty percent more people completed the survey with imagery and, in like-for-like questions, we saw significant improvements in click counts (in some cases 50 percent or more). We also found that 25 percent more people volunteered to take part in a second wave. All this meant we doubled the volume of feedback for certain questions.

Here is an example of a set of questions we asked about drinking water where we emphasized choices with supporting imagery. In the second question there were 35 percent more drinking incidents recorded across the day-part time slots.

Challenging the layout and design rules

Most survey technology was formed out of pen-and-paper thinking, with rigid grids and frame structures and a limited visual repertoire and graphical design interface. We have found that one of the best ways to make survey questions seem more fun to answer is to break away from conventional layout structures (see examples below).

We have reported in several papers how this improves the quality of data and the respondent experience: up to 80 percent less straightlining; lower neutral scoring (average 25 percent lower); and (if questions are designed ergonomically) dropout reduced from 5 percent to 1 percent.

The following is an example of the differences in consumer reaction to answering a question in a more gamified scrolling format. It is taken from the experiment mentioned earlier, in which we tested a format where choice options scroll across the page one by one as you answer them, against a standard grid format.

In this experiment, respondents spent 23 percent more time thinking about their answers. Respondents also agreed that the gamified version was easier, quicker and more fun, and were less likely to describe it as OK, boring or confusing than the standard grid version.
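For readers who build their own survey pages, a rough TypeScript sketch of the one-at-a-time presentation idea is shown below. This is illustrative only, not our survey engine's code; the item names, five-point scale and the "question" container ID are placeholders.

```typescript
// Minimal sketch: presenting rating items one at a time instead of a static grid.
// Item texts, scale labels and the container ID are hypothetical placeholders.
type Answer = { item: string; rating: number };

const items = ["Brand A", "Brand B", "Brand C"];   // hypothetical items
const scale = ["1", "2", "3", "4", "5"];           // hypothetical 5-point scale
const answers: Answer[] = [];
let current = 0;

function renderCurrentItem(container: HTMLElement): void {
  container.innerHTML = "";                        // clear the previous item
  const prompt = document.createElement("p");
  prompt.textContent = items[current];
  container.appendChild(prompt);

  scale.forEach((label, i) => {
    const btn = document.createElement("button");
    btn.textContent = label;
    btn.onclick = () => {
      answers.push({ item: items[current], rating: i + 1 });
      current += 1;
      if (current < items.length) {
        renderCurrentItem(container);              // advance to the next item
      } else {
        container.textContent = "Thanks - all done!";
        console.log(answers);                      // hand answers off to the survey engine here
      }
    };
    container.appendChild(btn);
  });
}

renderCurrentItem(document.getElementById("question")!);
```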

In another example, we switched to using star stamp effects: when selections were made, the choices were stamped onto the imagery. This encouraged nearly 50 percent more click selections.
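A simple way to approximate that stamp effect on a Web survey page is sketched below. Again, this is a hypothetical illustration rather than our implementation; the image class name, data attribute and star character are assumptions.

```typescript
// Minimal sketch: stamp a star onto an image tile when it is selected,
// rather than ticking a checkbox beside it. Markup and class names are hypothetical.
const selections = new Set<string>();

document.querySelectorAll<HTMLElement>(".choice-image").forEach(tile => {
  tile.addEventListener("click", () => {
    const id = tile.dataset.choiceId ?? tile.id;
    if (selections.has(id)) return;               // already stamped once

    selections.add(id);
    const stamp = document.createElement("span"); // the visual "reward" for clicking
    stamp.textContent = "\u2605";                 // a star character
    stamp.className = "star-stamp";
    stamp.style.position = "absolute";
    tile.style.position = "relative";
    tile.appendChild(stamp);
  });
});
```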

Adding selection rewards and feedback mechanics

The other thing that most survey technology is not equipped to do is give feedback. We have built and tested a range of feedback mechanics into our survey engine, allowing us to alert respondents when they have answered questions correctly and to record and increment their scores.
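To illustrate the basic idea, here is a minimal, hypothetical sketch of a scored-feedback mechanic for a quiz-style question with a known correct answer. The question text, answer and point values are invented for the example and are not taken from our engine.

```typescript
// Minimal sketch: check an answer, show immediate feedback and increment a running score.
interface QuizQuestion { text: string; correct: string; points: number }

let score = 0;

function submitAnswer(q: QuizQuestion, answer: string): string {
  if (answer.trim().toLowerCase() === q.correct.toLowerCase()) {
    score += q.points;                                   // increment the respondent's score
    return `Correct! You scored ${q.points} points. Total: ${score}.`;
  }
  return `Good try - the answer we had in mind was "${q.correct}". Total: ${score}.`;
}

// Example use (all values hypothetical)
const q: QuizQuestion = { text: "Name a brand of bottled water.", correct: "Evian", points: 10 };
console.log(submitAnswer(q, "evian"));   // "Correct! You scored 10 points. Total: 10."
```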

We have found that including these can have a transformative impact on respondents' attitudes toward the survey experience. In one experiment we conducted with research company Mintel, we integrated reward feedback scoring mechanics and the percent of respondents who said they "really enjoyed" the survey jumped from 26 percent to 87 percent.

Using more game-like questioning techniques

We have experimented with a range of even more playful and game-like questioning techniques, including a space-invader game where respondents fly through space and shoot at option choices, and a downhill skiing game (see below). Results varied. We found these more elaborate approaches had mixed appeal and could have a somewhat corrupting impact on the data, as respondents became confused between fulfilling the game mechanic and completing the task.

Common questions

There are two common questions I am always asked about game-play techniques: Who responds to game mechanics? And what impact do they have on the data?

To answer the first question, nearly everyone! That is, if you pitch it right. Basic word-play and rule-based survey games we have measured have earned over 95 percent active participation. We have run upwards of 100 game experiments over the last year, and I can name fewer than a handful that did not elicit more active participation than the same questions asked in a more traditional way.

As for the second question, the impact on the data is not inconsiderable. Often the results can be measurably different. There are several factors that influence this. First, if you engage people and they think more, you often get more responses, which is a good thing. That is the primary difference. Conversely, some game-play mechanics, particularly those based around predictive point-scoring tasks, can steer data off-course if the respondent's desire to win competes with giving honest responses. There are also differences in mind-set. When playing a game, respondents may be more enthusiastic, which can elevate brand evaluation scores, for example.

A creative solution

Over the last four years of research into how to improve online survey feedback, gamification is the most powerful and effective method we have come across. However, there is no escaping that it is a creative solution. Most ways to gamify surveys require copywriting and design skills, as well as technical expertise. As in advertising, where good ads help sell more product and bad ads don't, a well-executed game in a survey can be transformative and result in better feedback, but a poorly-executed one won't have the same impact and could steer the data in the wrong direction.

Whether you take the Field of Dreams approach, designing a game and then finding a research application for it, or start with a task and figure out how to make it more game-like, either route requires a huge amount of experimentation.

Notes on methodology: GMI and Engage Research do not claim that the results of any of these experiments are anything other than descriptive, anecdotal evidence about the impact of game play. These experiments have very much been a general exploration of this topic.