By the Numbers: When satisfaction scores go flat



Article ID: 20140106
Published: January 2014, page 24
Author: Peter Gurney

Article Abstract

Peter Gurney offers some dos and don’ts for dealing with a stubborn trend line.

Editor's note: Peter Gurney is senior director, VOC solutions, at Seattle-based NetReflector Inc.

Once a slow and expensive process, collecting feedback from customers is now quick, simple and relatively cheap with the widespread availability of online survey tools and reporting systems. Companies can collect ratings and feedback at every point of contact, including phone calls, e-mails, Web visits and point-of-sale purchases. In addition, managers can view survey results instantly instead of waiting weeks or months to find out what their customers are saying.

This information is a valuable and necessary component of any voice of the customer (VOC) program. But if you’ve been collecting survey results for a while, you’ve probably run into a situation that many organizations face: flat trend lines. Once the easy wins are behind you, any upward movement in the overall ratings becomes increasingly difficult to achieve. This wouldn’t be a problem if you could confidently say that your organization had reached a state of customer experience perfection, but in most cases, employees and managers are painfully aware that there is still plenty of improvement to be made.

The problem with flat trend lines isn’t simply that they suggest a lack of progress. It’s also that they’re boring. It’s difficult to keep stakeholders interested and motivated when they see the same scores month after month. Many customer-experience initiatives have stalled when satisfaction ratings reach a plateau.

Flat scores are actually just a sign that the VOC program needs to evolve. Various actions can be taken to push the program along, and different organizations approach the challenge in different ways. As a start, we offer a few dos and don’ts:

Do: Bring other metrics to the foreground. Satisfaction ratings (or NPS or however you’re keeping score) are not meant to be an end in themselves. They are intended to reflect customer attitudes and experiences as a means to achieving better business results. Eventually, satisfaction scores need to become less prominent as other success measures take the lead. Depending on what the goals of the program are, various operational and financial metrics may be brought forward, including complaint volumes, retention rates, new accounts, customer spend and average cost-to-serve. This doesn’t mean that satisfaction ratings disappear; they should continue to serve as an important indicator of the customer relationship. But as the Chinese proverb goes, “When the finger points at the moon, the fool looks at the finger.”

Don’t: Change the scale. Some organizations fall into the trap of blaming the messenger, assuming that a different scale or manner of asking about satisfaction will change the result. Here are some hard truths:

  • Bigger satisfaction scales don’t give you more precision. As a practical matter, all satisfaction analyses tend to break down into three buckets: negative, neutral or positive. Whether you’re using a five-point scale or a 100-point scale, you’ll still be looking at those three categories in the end. 
  • Using multidimensional indexes may not help, either. Combining and weighting several metrics, like overall satisfaction, willingness to recommend, likelihood to repurchase, etc., sounds scientific and gives the illusion of greater precision. Unfortunately, these formula-based indexes are seldom better predictors of business performance than simply tracking overall satisfaction.
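The three-bucket point above can be made concrete with a small sketch: whatever the width of the scale, reporting tends to collapse into negative, neutral and positive groups. The cutoff values below are illustrative assumptions, not industry standards — each program sets its own.

```python
# Illustrative only: a 5-point scale and a 100-point scale both end up
# in the same three buckets. Thresholds here are assumed for the example.

def bucket_5pt(rating):
    """Map a 1-5 satisfaction rating to negative / neutral / positive."""
    if rating <= 2:
        return "negative"
    if rating == 3:
        return "neutral"
    return "positive"

def bucket_100pt(rating):
    """Map a 0-100 satisfaction rating to the same three buckets."""
    if rating < 40:
        return "negative"
    if rating < 70:
        return "neutral"
    return "positive"

five_point = [5, 4, 3, 2, 5, 1]
hundred_point = [95, 80, 55, 30, 90, 10]

# Despite the different scales, both samples reduce to identical buckets.
print([bucket_5pt(r) for r in five_point])
print([bucket_100pt(r) for r in hundred_point])
```

The extra points on the bigger scale add apparent precision at collection time, but the analysis still ends at the same three categories.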

Do: Focus more heavily on open-ended responses. Numbers are nice because they’re easy to analyze and display. Words, on the other hand, are messy and analyzing them is labor-intensive. As a result, it is common for VOC researchers to severely limit the use of open-ended questions on their surveys. It is also common to find that the research team is sitting on a pile of unanalyzed comments, hoping they will eventually have the time to make sense of them.

Although customer comments are indeed more difficult to analyze and report on than ratings, it is often in the comments that the richest and most actionable information can be found. Companies that have hit a wall with their satisfaction ratings may want to look at redesigning their surveys to better allow customers to tell their stories in their own words. This may require additional work, but it will ultimately provide more powerful and actionable information.

Don’t: Shrink the scope. Satisfaction surveys can become overly focused on the needs of a specific user group, often at the expense of providing in-depth information about the customer relationship. For example, post-transaction surveys may be used primarily for coaching and rewarding call agents and other frontline service personnel, and over time they may be shortened to exclude any questions that are not directly related to the customer’s interaction with the agent. But this narrowly scoped data leaves out important information about the customer’s overall experience and relationship with the company. In general, voice-of-the-customer programs should include both in-depth relationship surveys and transaction-based feedback, and the transaction feedback should capture information about the entire experience, not just the performance of the service agent.

Do: Segment the results. Rather than tracking an overall satisfaction score for the company, it is often more productive to break the scores out by relevant customer groups and monitor them separately. Different groups may have different satisfaction criteria, as well as different expected ranges of satisfaction. For example, business travelers typically give lower satisfaction ratings than leisure travelers, even though they may, on paper, appear to be more loyal to a specific hotel brand or airline. Understanding how different groups are best satisfied and what the relevant ranges of their satisfaction ratings are will allow you to focus your improvement efforts more effectively.
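Segmented tracking of the kind described above is straightforward to sketch. In this minimal example the segment names and ratings are invented for illustration; the point is simply that a single overall mean would hide the gap between groups with different expected ranges.

```python
# Illustrative sketch: report satisfaction by customer segment rather than
# as one blended score. All data below is invented for the example.
from collections import defaultdict
from statistics import mean

responses = [
    ("business", 6), ("business", 7), ("business", 5),
    ("leisure", 9), ("leisure", 8), ("leisure", 9),
]

by_segment = defaultdict(list)
for segment, rating in responses:
    by_segment[segment].append(rating)

# The overall mean blends two groups with different typical ranges.
overall = mean(r for _, r in responses)
print(f"overall: mean {overall:.1f}")

for segment, ratings in sorted(by_segment.items()):
    print(f"{segment}: mean {mean(ratings):.1f} (n={len(ratings)})")
```

Tracked this way, each segment's trend line is judged against its own baseline, so a flat overall score can no longer mask movement within a group.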

Don’t: Settle for “good enough.” If satisfaction ratings have reached a plateau, it may be tempting to rationalize by claiming that further improvement is unnecessary or unaffordable. But this is seldom true. Executives at companies with superior service levels, such as Nordstrom, are frequently heard to use phrases such as, “We’re still far from perfect,” “We have a long way to go” and “We’re always working at getting better.” If scores are flat, it’s time to work harder, not to relax.

Do: Recruit new stakeholders. As VOC programs mature, they often apply customer feedback in new ways to meet the needs of an expanding base of internal clients. While VOC may initially be used for service recovery, frontline coaching and satisfaction monitoring, over time the information can be systematically applied to support product innovation, process improvement, vendor relations, training and communications content and other important organizational needs. At the same time, the VOC team may evolve from an analytical and report-generating group to an internal consulting organization, working closely with a wide range of stakeholders to help them advance their business objectives.

Continually evolve

The main point to keep in mind is that customer satisfaction and VOC programs are not meant to be static. As the organization becomes comfortable with the process of measuring and sharing customer feedback, the program must continually evolve by incorporating new measures, serving new stakeholders, and making more effective use of the information.
