Customer loyalty 2.0

Editor’s note: Bob E. Hayes is president of Seattle research and consulting firm Business Over Broadway.

The Net Promoter Score (NPS) is used by many of today’s top businesses to monitor and manage customer relationships. Fred Reichheld and his co-developers of the NPS claim that a single survey question is the only loyalty metric companies need in order to grow. Despite its widespread adoption by such companies as General Electric, Intuit, T-Mobile, Charles Schwab and Enterprise, the NPS is now at the center of a debate regarding its merits.

I will summarize the NPS methodology, including its developers’ claims and critics’ objections. I will then examine the meaning of customer loyalty as it is measured through survey questions. Finally, I will illustrate how the predictability of business performance measures improves when the loyalty question and the business performance measure share the same level of specificity.

The NPS is calculated from a single loyalty question, “How likely is it that you would recommend this company to a friend or colleague?” Based on their ratings on a 0-to-10 likelihood scale, where 0 means “not at all likely” and 10 means “extremely likely,” customers are segmented into three groups: Detractors (ratings of 0 to 6), Passives (ratings of 7 and 8) and Promoters (ratings of 9 and 10). A company calculates its Net Promoter Score by simply subtracting the proportion of Detractors from the proportion of Promoters:

NPS = prop(Promoters) – prop(Detractors)
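For readers who want to compute the score themselves, here is a minimal sketch in Python (my illustration, not part of the NPS methodology itself); the sample ratings are hypothetical.

```python
from typing import Sequence

def net_promoter_score(ratings: Sequence[int]) -> float:
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Promoters rate 9-10, Passives 7-8, Detractors 0-6.
    NPS = proportion(Promoters) - proportion(Detractors),
    usually reported on a -100 to +100 scale.
    """
    if not ratings:
        raise ValueError("No ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: 40% Promoters, 35% Passives, 25% Detractors -> NPS = 15
sample = [10] * 40 + [8] * 35 + [5] * 25
print(net_promoter_score(sample))  # 15.0
```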

Reichheld and the other developers of the NPS, Satmetrix and Bain & Company, have made very strong claims about the advantage of the NPS over other loyalty metrics. Specifically, they have said:

- The NPS is “the best predictor of growth” (Reichheld, 2003)

- The NPS is “the single most reliable indicator of a company’s ability to grow” (Netpromoter.com, 2007)

- “Satisfaction lacks a consistently demonstrable connection to … growth” (Reichheld, 2003)

Reichheld supports these claims with research displaying the relationship of NPS to revenue growth. In compelling graphs, Reichheld (2006) illustrates that companies with higher Net Promoter Scores show better revenue growth compared to companies with lower Net Promoter Scores (see left graph in Figure 1). Reichheld cites only one study, conducted by Bain & Company, showing the relationship between satisfaction and growth to be 0.001 [1].

Startling results

Researchers, pointing out that the NPS claims are supported only by Reichheld and his co-developers, have conducted rigorous scientific research on the NPS, with startling results. For example, Keiningham et al. (2007), using the same technique employed by Reichheld to show the relationship between NPS and growth, used survey results from the American Customer Satisfaction Index (ACSI) to create scatter plots showing the relationship between satisfaction and growth. Looking at the personal computer industry, they found that satisfaction is just as good as the NPS at predicting growth (see Figure 1). Keiningham et al. (2007) found the same pattern of results in other industries (e.g., insurance, airlines, ISPs). In all cases, satisfaction and NPS were comparable in predicting growth.

Other researchers (Morgan & Rego, 2006) have shown that other conventional loyalty measures (e.g., overall satisfaction, likelihood to repurchase) are comparable to NPS in predicting business performance measures like market share and cash flow.

In fact, contrary to Reichheld, other researchers have found that customer satisfaction is consistently correlated with growth (Anderson et al., 2004; Fornell et al., 2006; Gruca & Rego, 2005).

Cast a shadow

The recent scientific, peer-reviewed studies cast a shadow on the claims put forth by Reichheld and his cohorts. In fact, there is no published empirical evidence supporting the superiority of the NPS over other conventional loyalty metrics.

Keiningham et al. (2007) aptly point out that there may be research bias by the NPS developers. There seems to be a lack of full disclosure from the Net Promoter camp with regard to their research. The Net Promoter developers, like any research scientists, need to present their analyses to back up their claims and to refute the current scientific research that calls their methodological rigor into question. To date, they have not done so. Instead, the Net Promoter camp points only to the simplicity of this single metric, which they say allows companies to become more customer-centric. That is not a scientific rebuttal. That is marketing.

Similar pattern

Why do commonly used loyalty questions show a similar pattern of relationship to revenue growth? The measurement process behind the loyalty questions plays a key role in understanding the meaning of customer loyalty. First, let’s look at objective measures of loyalty. These metrics have minimal measurement error associated with them and, because they are not subject to interpretation, their meaning is unambiguous. The number of recommendations a customer makes is clearly distinct from the number of repeat purchases that customer makes.

Let us now look at the use of surveys to gauge customer loyalty, where customers’ ratings of each loyalty question (e.g., likelihood to recommend, satisfaction, likelihood to repurchase) become the metric of customer loyalty. Even though we are able to calculate separate loyalty scores from each loyalty question (e.g., NPS, overall satisfaction, likelihood to repurchase), the distinction among the loyalty questions may not be as clear as we think. Because of the way customers interpret survey questions and the inherent error associated with measuring psychological constructs (yes, when measured through surveys, customer loyalty is a psychological construct), ratings need to be critically evaluated to ensure we understand the meaning behind them. Psychological measurement principles and analyses (e.g., correlational analysis, factor analysis and reliability analysis) are used to help identify the meaning behind customers’ ratings.

I set out to compare four commonly used loyalty questions to study the differences, if any, among the questions. The four loyalty questions were:

1. “Overall, how satisfied are you with Company ABC?”

2. “How likely are you to recommend Company ABC to friends/colleagues?”

3. “How likely are you to continue purchasing the same product and/or service from Company ABC?”

4. “If you were selecting [a company within the industry] for the first time, how likely is it that you would choose Company ABC?”

An 11-point rating scale was used for each question. Question 1 was rated on a scale of 0 (extremely dissatisfied) to 10 (extremely satisfied). The remaining questions were rated on a scale of 0 (not at all likely) to 10 (extremely likely). With the help of Seattle research firm Global Market Insite Inc., which provided online data collection and consumer panels, I surveyed about 1,000 respondents (general consumers in the United States ages 18 and older) who were asked to identify and then rate their wireless service providers on the four questions. I obtained objective business metrics, when available, for each wireless service provider; these were annual revenue (2005 and 2006) and defection rates (Q2 2007).

I applied standard statistical analyses that are commonly used to evaluate survey questions. First, the average correlation among the four loyalty questions was very high (r = .87). This finding reveals that each customer responds to the four questions in a consistent manner. That is, customers who are highly likely to recommend the company are also highly likely to be satisfied with the company; conversely, customers who are not likely to recommend the company are also not likely to be satisfied with the company. The same pattern is seen across all pairings of the loyalty questions. Second, a factor analysis of the four questions showed a clear one-factor solution. Factor loadings, essentially representing the correlation between each question and the underlying factor, were all .90 or higher. This pattern of results clearly shows that all four questions, including the “likelihood to recommend” question, measure one underlying construct: customer loyalty.
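To illustrate what these analyses involve, the sketch below simulates ratings driven by a single underlying factor, then computes the average inter-item correlation and a one-factor solution. The simulated data, the column names and the use of scikit-learn’s FactorAnalysis are assumptions made for demonstration only; the article’s actual survey data and analysis software are not reproduced here.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 1000

# Illustrative data only: four 0-10 ratings driven by one underlying "loyalty" factor plus noise
loyalty = rng.normal(size=n)
questions = ["satisfaction", "recommend", "repurchase", "choose_again"]
ratings = pd.DataFrame({
    q: np.clip(np.round(5 + 2.2 * loyalty + rng.normal(scale=0.9, size=n)), 0, 10)
    for q in questions
})

# Average correlation among the four questions (off-diagonal entries of the correlation matrix)
corr = ratings.corr().values
avg_r = corr[np.triu_indices(4, k=1)].mean()
print(f"average inter-item correlation: {avg_r:.2f}")

# One-factor solution; approximate standardized loadings (the factor's sign is
# arbitrary, so absolute values are shown)
fa = FactorAnalysis(n_components=1).fit(ratings.values)
loadings = np.abs(fa.components_[0]) / ratings.std(ddof=0).values
print(dict(zip(questions, loadings.round(2))))
```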

Less reliable

The NPS developers support the use of a single question to understand customer loyalty. Single-item measures are less reliable (contain more measurement error) than multiple-item measures. A good analogy would be measuring math skills with a single-item math test vs. a 50-question math test. An answer to the single-item test would be a less reliable reflection of math skills than the combined answers to the 50-item math test. Would you want your child’s SAT score to be determined by a single question from the test or the entire set of questions on the test?
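The way reliability grows with the number of items is commonly summarized by the Spearman-Brown prophecy formula; the short sketch below uses it to illustrate the point. The formula and the example numbers are standard measurement-theory material, not figures taken from this study.

```python
def spearman_brown(single_item_reliability: float, k: int) -> float:
    """Reliability of a k-item composite built from items with the given single-item reliability."""
    r = single_item_reliability
    return k * r / (1 + (k - 1) * r)

# A single item with reliability .60 vs. composites built from more such items
for k in (1, 4, 10, 50):
    print(k, round(spearman_brown(0.60, k), 2))
# 1 0.6, 4 0.86, 10 0.94, 50 0.99
```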

As supported by the analyses above, the four loyalty questions can be averaged together to get a more reliable measure of loyalty, which I refer to as the advocacy loyalty index (ALI). The reliability of the ALI (Cronbach’s alpha = .96, high by psychological measurement standards) indicates that there is little measurement error when all four questions are used together. Using the ALI in customer loyalty management is better than using any single question because the ALI provides a more precise measure of loyalty than any of the four questions used alone.
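For readers who want to build a similar composite, here is a minimal sketch of Cronbach’s alpha and of an ALI computed as each respondent’s mean across the four questions; it assumes a ratings table like the hypothetical one in the earlier sketch.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) array of item ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# With the hypothetical `ratings` frame from the earlier sketch:
#   alpha = cronbach_alpha(ratings.values)
#   ali = ratings.mean(axis=1)  # advocacy loyalty index: mean of the four ratings per respondent
```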

Figure 2 shows that the NPS and the ALI are similarly related to revenue growth [2]. T-Mobile, Alltel and Verizon, all with high ALI or Net Promoter scores, have faster revenue growth compared to Sprint, which has a lower ALI and NPS.

Specific measures

Predictability improves when the predictor and the outcome share the same level of specificity (Figure 3). That is, specific outcomes are best predicted by specific measures. As an example, an employee’s intention to quit his/her job is a better predictor of whether that employee actually quits than are general measures of employee satisfaction. Conversely, general outcomes are best predicted by general measures.

In the survey, I included another loyalty question, “How likely are you to switch to a different provider in the next 12 months?” We see that the advocacy loyalty index (general predictor) is better than likelihood to switch (specific predictor) at predicting revenue growth (general outcome) (Figure 4). Revenue growth is impacted by more than just customers’ likelihood to switch; advocacy loyalty predicts growth better because of its general nature.

When we predict a more specific outcome, we see a different pattern of results (Figure 5). Likelihood to switch (specific predictor) is better than advocacy loyalty index (general predictor) in predicting defection rate (specific outcome). Likelihood to switch is a better predictor because it is specific and targeted to the outcome of interest.

Advocacy loyalty, however, encompasses aspects that are not related to whether customers stay or leave. Companies need to examine their business metrics closely and then select the appropriate loyalty metrics that best match them. Managing important customer outcomes goes far beyond a single, ultimate question.

Overlook disloyal customers

A company relying solely on the NPS as its ultimate metric may overlook customers who are disloyal in other ways. In the wireless service provider study, I found that, of the customers who are non-detractors (those scoring 7 or above), 31 percent are still likely to switch to a different wireless service provider. When it comes to managing customer relationships to minimize customer defections, the NPS falls short.
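A minimal sketch of how such at-risk non-detractors could be flagged appears below; the data, column names and the “likely to switch” cutoff of 7 are assumptions for illustration, as the article does not state the cutoff it used.

```python
import pandas as pd

# Hypothetical survey responses: 0-10 ratings on likelihood to recommend and likelihood to switch
survey = pd.DataFrame({
    "recommend": [9, 8, 7, 10, 7, 6, 9, 8],
    "switch":    [2, 8, 9,  1, 8, 9, 3, 2],
})

non_detractors = survey[survey["recommend"] >= 7]          # scoring 7 or above
at_risk = non_detractors[non_detractors["switch"] >= 7]    # cutoff of 7 is an assumption
share_at_risk = len(at_risk) / len(non_detractors)
print(f"{share_at_risk:.0%} of non-detractors are still likely to switch")
```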

Relying solely on the NPS to manage customers would result in missed opportunities to save a large number of at-risk customers from defecting. This mismanagement of customer relationships in the wireless industry, where defection rate is a key business metric, can be detrimental to revenue growth. Using Q2 2007 data for T-Mobile USA (Figure 6), it is estimated that over 900,000 of T-Mobile USA’s non-detractors are still likely to switch to another provider, with a potential annual revenue loss of over $29 million [3]!
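Footnote 3 supplies the inputs behind this estimate; the sketch below reconstructs the arithmetic. Treating the $53 ARPU as a monthly figure and multiplying by 12 is my assumption, chosen because it reproduces the “over $29 million” figure.

```python
at_risk_customers = 928_068       # non-detractors still likely to switch (Figure 6)
expected_defection_rate = 0.05    # footnote 3: 5 percent actually defecting
arpu_per_month = 53               # footnote 3: T-Mobile ARPU of $53 (assumed monthly)

defectors = at_risk_customers * expected_defection_rate
annual_revenue_loss = defectors * arpu_per_month * 12    # annualization is an assumption
print(f"${annual_revenue_loss:,.0f}")                    # roughly $29.5 million
```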

Not the best predictor

The NPS is not the best predictor of business performance measures. Other conventional loyalty questions are equally good at predicting revenue growth. Reichheld’s claims about the merits of the Net Promoter Score are grossly overstated, and the NPS developers have not addressed the criticisms about the quality of the research (or lack thereof) behind their claims.

General loyalty questions, including those measuring likelihood to recommend, measure one general construct: customer loyalty. Consequently, it is not surprising that many researchers find similar results across these loyalty questions when predicting revenue growth. Because single survey questions have inherent measurement error, aggregating responses across general loyalty questions (e.g., overall satisfaction, recommend, repurchase, choose again) is a useful way to create reliable loyalty metrics.

Companies should use a variety of loyalty questions to ensure at-risk customers are identified in a variety of ways. How well we are able to predict business performance measures depends on the match between the business metric and the loyalty questions. Specific loyalty questions are useful for predicting specific business outcomes (e.g., defection rate). General loyalty questions are useful for predicting general business outcomes (e.g., revenue). Companies need to do their research to fully understand how different loyalty measures correspond to specific business outcomes. Single, simple metrics are fraught with error and can lead to the mismanagement of customers and, ultimately, loss of revenue.


References

Anderson, E.W., Fornell, C., & Mazvancheryl, S.K. (2004). “Customer Satisfaction and Shareholder Value.” Journal of Marketing, 68 (October), 172-185.

Fornell, C., Mithas, S., Morgeson, F.V., & Krishnan, M.S. (2006). “Customer Satisfaction and Stock Prices: High Returns, Low Risk.” Journal of Marketing, 70 (January), 1-14.

Gruca, T.S., & Rego, L.L. (2005). “Customer Satisfaction, Cash Flow, and Shareholder Value.” Journal of Marketing, 69 (July), 115-130.

Hayes, B.E. (2008). Measuring Customer Satisfaction (3rd Ed.). Quality Press. Milwaukee, Wis.

Ironson, G.H., Smith, P.C., Brannick, M.T., Gibson W.M. & Paul, K.B. (1989). “Construction of a ‘Job in General’ Scale: A Comparison of Global, Composite, and Specific Measures.” Journal of Applied Psychology, 74, 193-200.

Keiningham, T.L., Cooil, B., Andreassen, T.W., & Aksoy, L. (2007). “A Longitudinal Examination of Net Promoter and Firm Revenue Growth.” Journal of Marketing, 71 (July), 39-51.

Morgan, N.A. & Rego, L.L. (2006). “The Value of Different Customer Satisfaction and Loyalty Metrics in Predicting Business Performance.” Marketing Science, 25(5), 426-439.

Netpromoter.com (2007). Homepage.

Reichheld, F.F. (2003). “The One Number You Need to Grow.” Harvard Business Review, 81 (December), 46-54.

Reichheld, F.F. (2006). The Ultimate Question: Driving Good Profits and True Growth. Harvard Business School Press. Boston.


Footnotes

1 http://resultsbrief.bain.com/videos/0402/main.html

2 When examining each of the four loyalty questions individually, the relationship to revenue growth was the same.

3  Based on 5 percent of 928,068 customers actually defecting, and T-Mobile’s ARPU of $53.