Editor’s note: Jerry W. Thomas is the president and CEO of Arlington, Texas-based marketing research and consulting firm Decision Analyst Inc.

One magical question (the so-called ultimate question) and one simple formula (the Net Promoter Score or NPS) are the ultimate measures of customer satisfaction and the ultimate predictors of a company’s future success.

These were the assertions in the book The Ultimate Question by Fred Reichheld. The same assertions were expanded upon in The Ultimate Question 2.0 by Fred Reichheld and Rob Markey. The authors argued that the ultimate question and the Net Promoter Score (NPS) “drive extraordinary financial and competitive results.”

Many chief executives have read the books about the ultimate question and NPS, or have heard the measures discussed by other executives and presented at conferences. Between the books, the publicity, the conferences and the favorable press, NPS has risen to almost mythical status: the Holy Grail of business success.

But is there really one ultimate question? Is NPS really the ultimate predictor of success?

According to Reichheld and Markey, the ultimate question is: “How likely is it that you would recommend (product, service, company) to a colleague or friend?” Answers are given on a 0-to-10 scale, with 10 defined as extremely likely to recommend and 0 defined as not at all likely to recommend.

The NPS is calculated from the answers on the 0-to-10 scale. Ratings of 10 and 9 are grouped together and called promoters, ratings of 8 and 7 are called passives, and ratings of 6 or below are called detractors. The NPS formula is the percentage classified as promoters minus the percentage classified as detractors.
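To make the arithmetic concrete, here is a minimal Python sketch of the calculation as just described (the function name and the sample ratings are invented for illustration):

```python
def net_promoter_score(ratings):
    """Compute NPS from a list of 0-to-10 ratings.

    Promoters rate 9 or 10, passives rate 7 or 8, detractors rate 0 to 6.
    NPS = percent promoters minus percent detractors.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical sample of ten customer ratings
sample = [10, 10, 9, 9, 8, 8, 7, 6, 5, 2]
print(net_promoter_score(sample))  # 4 promoters, 3 detractors -> 10.0
```

Note that the score can range from -100 (all detractors) to +100 (all promoters).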

Here are some observations about the ultimate question and NPS. Let’s start with the positives and then move to the negatives.

Positives

  • The question itself is a good one. It’s clear and easy to understand.
  • The 0-to-10 rating scale is widely used and generally accepted as a sensitive scale (i.e., it can accurately measure small differences from person to person).
  • The labels on the scale’s endpoints (extremely likely = 10, not at all likely = 0) are clear and easily understood.

Negatives

  • Ambiguity. The individual numbers on the 0-to-10 answer scale (with the exception of the endpoints) are not labeled or defined. What does someone’s answer really mean? Are ratings of a 7 or an 8 positive, neutral or negative? Some people tend to give high ratings, while others tend to give low ratings, especially when most of the points on the scale are not precisely defined.
  • Lost information. The NPS formula is imprecise due to lost information. Here’s how NPS loses information:
    • The NPS counts an answer of 10 and an answer of 9 as equal. Isn’t a 10 better than a 9? This information (that a 10 is better than a 9) is lost in the formula.
    • If someone answers an 8 or a 7, the answer simply doesn’t count – it’s not included in the formula. So all of the information in an answer of 8 or 7 is lost and the sample size is reduced because these individuals are not counted. A smaller sample size increases statistical error.
    • The NPS counts an answer of 6 the same as an answer of 0, 5 the same as 0 and so on. Aren’t answers of 5 or 6 much better than 0? Most of the information in answers 6, 5, 4, 3, 2 and 1 is lost in the NPS formula because it counts all 0-to-6 ratings as equal to 0.
    • In effect, the NPS converts a very sensitive 0-to-10 answer scale into a crude two-point scale (promoters and detractors) that loses much of the information contained in the original answers.

A better measure than the NPS is a simple average of the answers to the 0-to-10 scale, where a 10 counts as 10, 9 counts as 9, 8 counts as 8 and so on down to 0. This results in an average score somewhere between 0 and 10 that contains all of the information in the original answer scale. No information is lost!
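To see how much information the NPS formula discards, consider a short sketch along the same lines (the two samples are invented for illustration): two sets of ten answers that yield the identical NPS even though their simple averages clearly differ.

```python
def nps(ratings):
    """NPS: percent of 9-10 ratings minus percent of 0-6 ratings."""
    n = len(ratings)
    return 100.0 * (sum(r >= 9 for r in ratings) - sum(r <= 6 for r in ratings)) / n

def mean_score(ratings):
    """Simple average of the raw 0-to-10 answers; nothing is thrown away."""
    return sum(ratings) / len(ratings)

# Two hypothetical samples: identical promoter and detractor counts,
# quite different answers within those groups.
a = [10, 10, 9, 8, 8, 6, 6, 5, 4, 0]
b = [9, 9, 9, 7, 7, 6, 6, 6, 6, 6]

print(nps(a), nps(b))                # -20.0 -20.0: NPS cannot tell them apart
print(mean_score(a), mean_score(b))  # 6.6 7.1: the averages can
```

Two customer bases that the NPS declares identical are, on the raw answers, noticeably different.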

  • Misnomers. The terms Promoters, Passives and Detractors are curious. If someone answers with a 10 or a 9, it would seem defensible to classify them as Promoters (i.e., people highly likely to recommend your brand or company). Calling ratings of 8 or 7 Passives is highly questionable. An 8 or a 7 may be pretty good and one might conclude that the individuals who give those ratings are also likely to recommend your brand. While the Passive name is a misnomer, the real sin is the term Detractor. The answer scale does not provide a place to record that the respondent is likely to recommend that people not buy your brand. That end of the scale is labeled as not at all likely to recommend. Not likely to recommend is a far cry from being a Detractor (i.e., someone who actively tells friends not to buy a brand or someone who makes negative remarks about a company). Detractor is a misnomer.
  • Recommendation metric. The likelihood that someone will recommend a brand or company varies tremendously from product category to product category. Someone may recommend a car dealership, restaurant or golf course (high-interest categories) but not mention a drugstore, gas station, bank or funeral home (low-interest categories). If customer recommendations are not a major factor in your product category, then the NPS might not be a worthwhile measure for your brand. A sound strategy is to tailor the customer-experience questions to your product or service and to your business goals. Use multiple questions that measure the aspects of customer experience relevant to your company. Don’t buy into the illusion of a universal truth or the promise of an ultimate question. Don’t fall for simple answers to complex questions.

If the ultimate question is not really the ultimate question, then what are some best practices to create better questions to measure customer satisfaction?

Questionnaire design

The first rule is: do no harm. That is, your attempts to measure customer satisfaction should not lower your customers’ satisfaction. This means that questionnaires should be simple, concise and relevant. Use very simple rating scales (yes/no, excellent/good/fair/poor). Short word-defined scales (e.g., excellent, good, fair, poor) are easy for customers to answer, and the results are easy to explain to executives and employees. Moreover, short, simple scales work well for surveys taken on PCs, tablets and smartphones. Avoid long, complicated scales.

The questionnaire should almost always begin with an open-ended question to give the customer a chance to tell his or her story. An opening question might be:

“Please tell us about your recent experience of buying a new Lexus from our dealer in northern Denver.”

This open-ended prompt gives the customer the opportunity to explain and complain. It communicates that you are really interested in the customer and his or her experiences. The open-ended question also conveys that your company is really listening. Then you can ask rating questions about various aspects of the customer’s experience but keep these to a minimum.

Most satisfaction questionnaires are much too long. If you want to include a recommendation question, you might consider something similar to Figure 1 (remember: the wording must be tailored to your product, company and situation). 

With this question-and-answer scale, it’s possible to calculate a Net Recommendation Score as shown in Figure 2.

The recommendation question’s well-defined answer choices, and the formula used to calculate the Net Recommendation Score, are designed to give you a more precise measure of the net influence of customer recommendations than the ultimate question and the NPS can provide.
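Figures 1 and 2 are not reproduced here, so the exact answer wording and scoring are the author’s. Purely as a hypothetical sketch: if the tailored question lets a respondent say that he or she would actively recommend the brand, would say nothing, or would recommend against it, a net score could be computed along these lines (the category names and sample data are invented):

```python
from collections import Counter

# Hypothetical answer categories for a tailored recommendation question;
# the actual wording and scoring appear in Figures 1 and 2.
RECOMMEND_FOR = "would recommend the brand"
SAY_NOTHING = "would say nothing either way"
RECOMMEND_AGAINST = "would recommend against the brand"

def net_recommendation_score(responses):
    """Percent actively recommending minus percent recommending against."""
    counts = Counter(responses)
    return 100.0 * (counts[RECOMMEND_FOR] - counts[RECOMMEND_AGAINST]) / len(responses)

# Invented sample of ten responses
sample = [RECOMMEND_FOR] * 5 + [SAY_NOTHING] * 3 + [RECOMMEND_AGAINST] * 2
print(net_recommendation_score(sample))  # (5 - 2) / 10 -> 30.0
```

Unlike the NPS, a construction of this kind gives true detractors, people who would actively steer others away from a brand, a place on the scale, which is exactly the gap the Misnomers critique identifies.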

In summary, the ultimate question is simply another question. It has no special meaning and no unique power to predict success. The NPS is not a magical formula but a flawed one that loses much of the information in the original answer scale. If you like the concept of measuring the influence of customer recommendations, you might consider an approach such as the Net Recommendation Score. But please remember that it is only one measure; you will need other questions to fully measure and understand the customer experience.