Editor’s note: Giselle Lederman is a survey methodologist with San Francisco research firm Zoomerang.

Research-on-research suggests that appropriate formatting is vital to successful Web-based surveys. The right formatting engages your respondents, makes it easier for them to navigate through surveys and maximizes your return rates. Correct formatting also helps elicit unbiased opinions so that you can collect highly accurate feedback that supports your critical decisions.

So if you spend your days designing surveys, you may often wonder whether you’re formatting them optimally. When should you use a radio button rather than a drop-down, for example? Or, more importantly, what general principles of design should you follow?

Radio buttons

Traditionally, radio buttons are represented as small circles in Web design. They should be used when respondents must choose exactly one answer from a predefined set of options - in other words, when responses are mutually exclusive. Radio buttons are also the best choice for either/or items.

A few findings:

  • A 2002 experiment in Belgium revealed the advantages of using radio buttons. Two groups were given the same survey: one with radio buttons and the other with drop-downs. Participants using radio buttons were more likely to complete the survey (88.37 percent) than those using drop-downs (84.07 percent)1.
  • The use of radio buttons offering noncommittal answers such as “don’t know” did not increase the likelihood of such non-substantive answers being selected1.
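For readers who build their own Web forms, the mutual exclusivity described above is easy to see in markup: every radio input in a group shares a single field name, and the browser enforces that only one can be selected. A minimal sketch in Python generating illustrative HTML (the question and option names are made up for the example):

```python
# Minimal sketch of a radio-button group. Giving every input the same
# `name` attribute is what makes the options mutually exclusive in the
# browser. Field and option names here are purely illustrative.
def radio_group(name, options):
    rows = []
    for value in options:
        rows.append(
            f'<label><input type="radio" name="{name}" value="{value}"> '
            f"{value}</label>"
        )
    return "\n".join(rows)

print(radio_group("satisfaction", ["Satisfied", "Neutral", "Dissatisfied"]))
```

Because all three inputs share `name="satisfaction"`, a respondent can select only one of them at a time.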

Checkboxes

Checkboxes resemble the multiple-choice categories often used in paper-based surveys. These response categories can be mutually exclusive, or they can be used when a single question allows multiple answers.

A few findings:

  • Recent research on Internet surveys and response categories tends to treat radio buttons and checkboxes as one and the same, so conclusions regarding the impact of using a checkbox vs. a radio button are not yet definitive.
  • It appears that stationary, visible checkboxes may reduce end-user mistakes.
  • Multiple-option “check all that apply” questions in Web surveys may lead respondents to fill in what they perceive to be the “appropriate” number of responses and then skip to the next question, introducing measurement error. Published research on these issues is minimal, so multiple checkboxes in Web surveys should be used with caution2.
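The markup-level contrast with radio buttons is worth seeing: each checkbox can be ticked independently, so a single question can submit several values under one field name. A minimal sketch (again in Python, with illustrative HTML and made-up field names):

```python
# Minimal sketch of a "check all that apply" group. Unlike radio
# buttons, each checkbox toggles independently, so respondents can
# submit several values for one question. Names are illustrative.
def checkbox_group(name, options):
    rows = []
    for value in options:
        rows.append(
            f'<label><input type="checkbox" name="{name}" value="{value}"> '
            f"{value}</label>"
        )
    return "\n".join(rows)

print(checkbox_group("contact_channels", ["Email", "Phone", "Mail"]))
```

The only structural difference from the radio group is `type="checkbox"` - which is one reason research so often lumps the two formats together.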

Drop-downs

Drop-down boxes initially appear in your survey as a blank box with an instruction to scroll. This response format requires users to click on the box and scroll through the options to locate their answer.

A few findings:

  • Because response categories are not totally visible on the initial screen, drop-down options should be used for fields with which respondents are already familiar (such as their state of residence). If drop-downs are used, it’s important to organize response options in a logical way (e.g., alphabetical listings).
  • Research indicates that the choice between drop-downs and radio buttons did not affect users’ tendency to give “don’t know” responses or leave items blank. However, drop-downs took more time for users to complete3.
  • Visibility and primacy are key factors influencing choice on both Web- and paper-based surveys. When comparing the two response formats - radio buttons and drop-downs - the order in which answer options are presented affects the likelihood of one being chosen over another. This ordering effect appears to be magnified with drop-downs, especially when not all options are initially visible - for example, when the first five options are displayed and the rest are hidden3.
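The recommendations above - logical (e.g., alphabetical) ordering and no pre-selected answer - translate directly into markup. A minimal sketch in Python generating an illustrative `<select>` element (field names and option values are made up):

```python
# Minimal sketch of a drop-down question. Options are sorted
# alphabetically so respondents can locate familiar items quickly,
# and a blank prompt row comes first so no real answer is
# pre-selected. All names and values here are illustrative.
def dropdown(name, options, prompt="Select one..."):
    rows = [f'<option value="">{prompt}</option>']
    for value in sorted(options):
        rows.append(f'<option value="{value}">{value}</option>')
    return f'<select name="{name}">\n' + "\n".join(rows) + "\n</select>"

print(dropdown("state", ["Texas", "California", "New York"]))
```

Sorting in code rather than by hand also guards against ordering drift when the option list changes between survey waves.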

Fill-in boxes

Text or fill-in boxes are blank spaces in which users can enter free-form answers. Based on our field experiences, text boxes are ideal when you have more answers than a drop-down can accommodate or when you want to hear exactly what users have to say.

A few recommendations:

  • Providing clear, concise directions to users gets the best results.
  • Motivation may play a stronger part in getting users to complete text boxes than multiple-choice items, since text boxes require more effort and thought.
  • Placement of fill-in boxes may also be important. Using them earlier in the survey is generally better than later. However, you should avoid using fill-in boxes at the top of the survey, as many respondents want to start with easy, non-controversial questions.
  • Fill-in boxes may have higher skip rates, so it’s wise to use them sparingly.
  • Users typically fill the space provided. So when using fill-in boxes, it’s important to allow sufficient space. If you want shorter answers, limit the space provided. If you want longer responses, use larger boxes to accommodate wordier responses.
  • And, as many survey designers already know, informing subjects that fill-in answers are optional may increase the likelihood they will not complete those areas of the survey.
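The sizing advice above - users typically fill the space provided - maps onto a single markup knob. A minimal sketch (illustrative HTML generated from Python; the field names are invented for the example):

```python
# Minimal sketch of a fill-in box. Because respondents tend to fill
# the space provided, the `rows` setting doubles as a signal about
# how long an answer you expect. Field names are illustrative.
def text_box(name, rows=3, cols=60):
    return f'<textarea name="{name}" rows="{rows}" cols="{cols}"></textarea>'

print(text_box("comments", rows=6))   # roomy box invites longer feedback
print(text_box("one_word", rows=1))   # single line nudges a short answer
```

Pairing box size with the kind of answer you want is a small change, but it sets respondent expectations before they type a word.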

Variety is good

In the world of surveys, even the smallest details - a radio button, for example - can mean the difference between a successful and an unsuccessful survey. Yet none of the aforementioned research results points to one response category as being empirically better than another. Variety is good: the research implies that alternating between response categories helps engage users and counteract fatigue.

To ensure you’ve created the most engaging survey possible, experiment with different formats in your pre-tests, which can help you uncover inefficiencies before you send out your surveys. Consider the space available. Think about the number of questions you’ll have. Evaluate the visual cues that will move respondents from one question to the next. And most importantly, think of your audience - who they are and how they wish to give feedback.

References

1 Heerwegh and Loosveldt. “An Evaluation of the Effect of Response Format on Data Quality in Web Surveys.” www.icis.dk/ICIS_papers/A2_3_2.pdf (2002) and Social Science Computer Review, Vol. 20, No. 4, 471-484 (2002).

2 Dillman, Tortora and Bowker. “Principles for Constructing Web Surveys.” http://survey.sesrc.wsu.edu/dillman/papers/websurveyppr.pdf (1998).

3 Couper, Tourangeau, Conrad and Crawford. “What They See is What We Get.” Social Science Computer Review. http://ssc.sagepub.com/cgi/reprint/22/1/111.pdf (2004).