Data Use: An analysis of the impact of survey scales

November 2013, page 22
Adam S. Cook

Article Abstract

Adam Cook examines many options for survey scales and offers some research-on-research that explores the effects of various scale point ranges.

Editor's note: Adam Cook is director of research and development at Pilot Media Inc., Norfolk, Va.

Survey scales are important because they capture the degree of feeling behind a response. Yes-or-no options can't always convey consumer perceptions and feelings. But which numeric scale is best for analysis, and which is easiest for the respondent to interpret?

If it's a paper or phone survey, sophisticated and/or easily-misinterpreted scale questions can be a real challenge. A scale can't have too many radio-button points (e.g., you don't want to ask on a scale of 0 to 100, where 0 = x and 100 = y) or too many descriptors defining each point (e.g., completely, somewhat, rarely, never, neutral, etc.). But if we don't include enough options, our ability to differentiate between responses becomes more limited (e.g., a 1-to-3 scale doesn't give us much information to work with). See Figure 1 for varying examples of scale questions.

Thankfully, interactive online surveys exist, and they have real untapped potential for finding the sweet spot between maximizing participation and maximizing analytic reliability and differentiation. Given the challenges of traditional collection methods, I'm going to focus on the ideal for interactive online scales.

Even vs. odd number of scale collection points. The options are many: 1-to-10 (10 options) or 0-to-10 (11 options); 0-to-7 (eight options) or 1-to-7 (seven options); 0-to-5 (six options) or 1-to-5 (five options). The usual argument I hear for even-numbered scales is that they force respondents to lean toward one of the extremes. I understand the desire for definitive feelings but the truth is that some people have no feeling, one way or another, toward certain things (an inconvenient reality for some decision makers). Eliminating the neutral option introduces bias into the results and ultimately the analysis. I feel it's "extremely important" to use odd-numbered scales, ones that offer a true mid-point of neutrality. But sometimes you have to work with whatever scale you're given to analyze; we're not always in a position to choose.

Number of scale points: three, five, seven, nine, 11, 13 . . . ? Well, we know one scale point isn’t an option and I’m ruling out even-numbered scales. Here’s what I do know (from the book Marketing Research: Methodological Foundations): “Research indicates a positive relationship between the number of scale points and reliability.”

Having a large number of scale points is important for analysis, but with a radio-button collection method, a scale exceeding 11 points (0 to 10 or 1 to 11) can look overwhelming, so I recommend not exceeding 11. One more note: if you are going to display the numbers on the scale, your maximum scale should probably be 0 to 10. Much has been written on the importance of this scale and its ease of interpretation for respondents: zero is typically understood as bad and 10 is usually associated with the highest of marks. A 1-to-11 scale wouldn't work because 11 is not commonly associated with perfection or rankings. And 1 is sometimes considered the best without a clear definition (much like an ace being high or low in card games), so the lack of a clear association may create confusion.

Here's where an interactive scale can help us overcome participation and visual fatigue. Sliding-scale displays remove the need to show numbers at all (see options five and six in Figure 1 for visual examples of sliding scales). With the numbers coded into the background, you need not worry about confusion or an overwhelming radio-button display; the underlying scale is ultimately up to the analyst developing the interactive survey. A sliding scale can even give us more than 11 points to help maximize reliability. Technically we would be limitless, but 0 to 1 billion sounds like a bit much. If you don't like the idea of 101 points spanning 0 to 100, simply create 101 points spanning 0 to 10 by moving the decimal point to the tenths (e.g., 0, 0.1, 0.2, 0.3 … all the way up to 9.8, 9.9, 10). I have yet to see this offered as a scale option but I'd love to have the capability. This brings us to our last quandary: where to start the scale.
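The decimal-shift idea above is simple enough to sketch in a few lines of Python. This is an illustrative helper only (the function name and the assumption that the slider reports an integer 0-100 position are mine, not from any survey platform):

```python
def slider_to_tenths(raw_position: int) -> float:
    """Map a hidden 0-100 slider position onto a 0.0-to-10.0 scale.

    The respondent only sees a slider; the 101 underlying points are
    coded into the background and shifted to tenths for analysis.
    """
    if not 0 <= raw_position <= 100:
        raise ValueError("slider position must be between 0 and 100")
    return raw_position / 10


print(slider_to_tenths(73))   # a position of 73 becomes a score of 7.3
print(slider_to_tenths(100))  # the top of the slider becomes 10.0
```

The respondent never sees the 101 points, so the display stays clean while the analyst keeps the extra reliability of the finer-grained scale.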

Starting a scale with a 1 or a 0. If you're displaying numbers, the starting point is actually fairly arbitrary as long as the numbers are clearly defined; it's the number of scale points that dictates the strength of your analysis. A 1-to-10 scale is essentially the same as a 0-to-9 scale or, as crazy as it may sound, a 2-to-11 scale (each has 10 points). I would hope it's self-evident that if your low-end descriptor is "not at all," "none," "never" or anything else that's a definitive null, then 0 is the best number to start with. I really don't have a case for using 1 and I'm not completely sure why scales start with 1 for display or analysis purposes. Until I hear a solid rationale for starting at 1, I'm going to stick with 0 when given the option. Collecting on a 0-to-10 scale also gives the easiest conversion to percentage analyses.

Obviously the choice of scale is ultimately yours, but it is worth considering the implications of that choice. Figure 2 represents my preferences.

Scale analysis options and pitfalls

Why median scores are a bad idea. Medians are good for analyses of data with extreme outliers. Household income is probably the best example: one billionaire can make an average income skyrocket. But a scale question has a distinct, bounded range, so there are no runaway outliers for the median to protect against, and the median throws away most of the information in the distribution. The differences between an analysis using averages and one using medians can be significant. See Figure 3 for a random example of 100 respondents analyzed using medians versus averages.

The average analysis was 19 percent higher than the median analysis in 2011 and 11 percent lower in 2012. And when you analyze the change from 2011 to 2012, the average shows a 12 percent increase while the median shows a 50 percent increase. If that isn't enough to put the nail in the coffin of median analyses on scale questions, I don't know what is.
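The divergence is easy to reproduce with made-up data. The two distributions below are illustrative numbers of my own, not the Figure 3 data, but they show the same phenomenon: the mean moves modestly year over year while the median jumps.

```python
from statistics import mean, median

# Two hypothetical years of 100 responses each on a 0-to-10 scale.
scores_2011 = [2] * 40 + [3] * 20 + [9] * 40   # polarized responses
scores_2012 = [5] * 60 + [6] * 40              # clustered responses

for year, scores in (("2011", scores_2011), ("2012", scores_2012)):
    print(year, "mean:", round(mean(scores), 2), "median:", median(scores))

# The mean rises from 5.0 to 5.4 (an 8 percent increase), while the
# median rises from 3 to 5 (a 67 percent increase) on the same data.
```

Because the scale is bounded, the two statistics are measuring the same well-behaved data, yet they tell very different stories about the trend.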

Why rounding averages is another bad idea. In Figure 4's example of two different average scores, where Group A = 4 and Group B = 5, Group B's average appears 25 percent higher than Group A's. But what if Group A's average was actually 4.49 and Group B's was 4.51? The difference would be minimal. What if Group A's average was actually 3.50 and Group B's was 5.49? Group B's average would be 57 percent higher. The same rounded pair can hide a true difference anywhere from roughly 0 percent to 57 percent, which tells me it's a bad idea to round results in analyses. See Figure 4 for examples of the impact of rounding averages.
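Here is a quick sketch of the arithmetic behind those ranges, using the averages quoted above (the helper function is hypothetical, written just for this illustration):

```python
def pct_higher(a: float, b: float) -> float:
    """Percent by which b exceeds a."""
    return (b - a) / a * 100


# Both of these pairs round to the Figure 4 values of 4 and 5,
# which report a 25 percent difference:
print(round(pct_higher(4, 5), 1))        # the rounded comparison: 25.0
print(round(pct_higher(4.49, 4.51), 1))  # the near-tie case: 0.4
print(round(pct_higher(3.50, 5.49), 1))  # the wide-gap case: 56.9
```

Three very different realities, one rounded result.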

Why percentage groupings can misrepresent results. I've seen reports and analyses that group numbers together from a scale question. See Figure 5 for an example of grouping 7s or higher on a scale of 0 to 10; all of the examples in Figure 5 have 25 percent scoring a 7 or higher.

It didn’t occur to me until recently how inaccurate these groupings can be in analysis. When you start creating a number of different scenarios, some random and some extreme, the variations are a wake-up call. In Figure 5, Example 3 Minimum and Example 4 Maximum share the same 25 percent scoring a 7 or higher, but Example 4’s average score is 19 percent higher than Example 3’s.

At its most extreme in scoring, shown as “Extreme (-)” or “Extreme (+),” the maximum can be 300 percent higher than the minimum. See Figure 6 for additional analysis comparisons in averages (non-rounded) versus the grouping method for 50 percent and 75 percent scoring 7 or higher on a scale of 0 to 10.
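The effect is easy to demonstrate with two made-up distributions (these are not the Figure 5 data) that share the exact same top-box percentage but have very different averages:

```python
from statistics import mean

# Two hypothetical sets of 100 responses on a 0-to-10 scale.
# Both have exactly 25 percent scoring a 7 or higher.
low_tail = [0] * 75 + [7] * 25     # everyone else at the bottom
high_tail = [6] * 75 + [10] * 25   # everyone else just under the cutoff


def top_box_share(scores, cutoff=7):
    """Fraction of respondents scoring at or above the cutoff."""
    return sum(s >= cutoff for s in scores) / len(scores)


print(top_box_share(low_tail), round(mean(low_tail), 2))    # 0.25 and 1.75
print(top_box_share(high_tail), round(mean(high_tail), 2))  # 0.25 and 7.0
```

A grouped report would call these two samples identical, even though one averages 1.75 and the other 7.0, exactly the kind of gap the averages-based analysis catches and the grouping hides.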

Converting the scale analysis into percentage representation. Scale analysis can be converted into a percentage analysis (as already seen in Figures 5 and 6). If you're using a 0-to-10 scale, simply moving the decimal point converts your average scores into percentages: at its lowest, an entire sample of 0s equates to 0 percent and at its highest, an entire sample of 10s equates to 100 percent. If you're using a scale other than 0-to-10, you'll need a less-obvious conversion formula, which can be found in Figure 7. For an example of scale impact and conversion on a scale of 1-to-10, see Figure 8.
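Figure 7 isn't reproduced here, but a minimal sketch of what such a conversion looks like, assuming the standard min-max normalization, would be:

```python
def scale_to_percent(avg: float, lo: float, hi: float) -> float:
    """Convert an average on a lo-to-hi scale to a 0-100 percent score."""
    return (avg - lo) / (hi - lo) * 100


# On a 0-to-10 scale the conversion is just the decimal shift:
print(round(scale_to_percent(7.3, 0, 10), 2))  # 73.0
# On a 1-to-10 scale the same average converts differently:
print(round(scale_to_percent(7.3, 1, 10), 2))  # 70.0
```

Note how the same average of 7.3 yields a different percentage depending on where the scale starts, which is the scale-impact point Figure 8 illustrates.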

Not all analyses are equal. In fact, some can be downright deceptive. My advice: When you can, use 0-to-10 scales, conduct average (non-rounded) analyses and convert to percentage analyses when needed.
