Editor’s note: Dennis Murphy is vice president of the technology practice at Directions Research, Cincinnati. Chris Goodwin is a vice president at Directions Research. This is the second of a three-part series of articles. Part I appeared in the July issue. Part III appears in the October issue.

In the July issue of Quirk’s we identified a number of problems endemic to customer satisfaction. In this second of three articles, we discuss the consequences of those problems. Part III, appearing in the October issue, will suggest potential solutions.

In the previous article, we put the problems into four categories: insubstantial theory, haphazard execution, measurement confiscation and inappropriate application. Each category of problem poses serious dangers for an organization that leans heavily upon customer satisfaction for guidance in improving its business. Using a framework of satisfaction-related assertions, we’ll again take a look at those categories to illuminate the major consequences of the mindless or rote pursuit of increasing customer satisfaction measures.

Insubstantial theory

Assertion 1: Satisfaction is an attitude, not a behavior, and therefore a spurious goal in and of itself.

Previously we argued that the theoretical foundations underpinning customer satisfaction were weak or at least not fully thought-through before customer satisfaction rose to prominence in the measurement world.

Yes, satisfying customers overall is generally a good thing, but it is a means, not an end. We’ll get to what we consider to be the right end a few points from now. For the moment, though, consider the following scenario.

Imagine reading a baseball game recap extolling the talents of all of the participants: Repurchase laid down a great bunt, Overall Satisfaction hit one out of the park and Recommend gave up two hits in relief.

These are all great things to know, but who won? We may be interested in how the players performed, but that information is relatively empty of value sans a final score. First, did we win or lose - did we make more money or less? Then - and only then - does an individual’s contribution matter. Now we can begin to explain the victory or defeat - the true dependent variable - in terms of contributors, and that’s precisely the proper role for the most common satisfaction measures. They are intermediate variables at best; their role is primarily explanatory.

The consequence of forgetting this is that companies pursue improvements that may be totally inconsequential to the bottom line.

Observation: If the main purpose of your business is to be popular, you may soon be looking for a new business.

Assertion 2: Satisfaction always requires context.

Not only is satisfaction not the desired analytic centerpiece (dependent variable), sometimes it’s not even critical support. Consider:

•   If there are no alternative choices available to you, at least in the short term, satisfaction is pretty much irrelevant. You’re captive.

•   If the costs (monetary, physical or emotional) involved in switching are prohibitive, again satisfaction may not be material.

Before agreeing to view satisfaction as even a driver of your business, you need to verify that the assumption is actually valid. Otherwise, while pursuing satisfaction, your company may be ignoring something that is truly required to succeed.

Observation: Satisfaction may matter - then again it may not.

It is not our intent to be dismissive of satisfaction or suggest that you be. It’s simply responsible to understand the role it truly plays in your business.

Assertion 3: There is one business measure that matters most and it’s not recommendation!

We have nothing against recommendation - we even think it’s often a very valuable input - but to us the one measure that really matters most is market performance. We’ll use market share as the surrogate, recognizing that what it really represents is the stream of profit created by an organization. Now that’s a measure that we can all cheer on.

Here’s the logic chain that should begin to align the players in the proper roles: We desire to make profitable sales, and we do that by increasing the number of customers who choose our product. That is the element we should be trying to predict - not overall satisfaction, not repurchase and not recommendation, as they’re all simply attitudes, albeit potentially very powerful attitudes. The question then becomes how those attitudes can help drive choice.

Satisfaction measures join a myriad of other potentially explanatory variables that help us understand sales. There is also a hierarchy of ways sales can be measured: as elementary as a constant-sum question (simulated behavior) embedded in a questionnaire, or as sophisticated as a direct connection (reported behavior) to the customer database. The point here, however, is that regardless of methodology, we’re no longer addressing some aggregate and correlative relationship but striving for resolution at the individual customer level.

The consequence of sole or heavy reliance upon attitudinal measures like repurchase is measurement myopia. Microscopes aren’t designed to provide holistic views.

Observation: We don’t track satisfaction to enhance satisfaction; we track satisfaction to enhance sales.

Haphazard execution

Assertion 4: Creating conventions is conventional thinking.

We argued in the last article that poor execution of the hypotheses around customer satisfaction could lead to consequences that augment those caused by the insubstantial theory underlying customer satisfaction.

Early on it became clear to many organizations that overall customer satisfaction provided an incomplete paradigm. Accordingly, many attempts have been made to “improve the system.” Here are some of the most familiar:

A pioneering effort is referred to as the 3M formula. It is the sum of three measures: overall satisfaction, repurchase and recommend (five points each for a potential total of 15 points). It is generally a better predictor than any single measure, since the two complements do offset some of the shortcomings of overall satisfaction alone.
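For illustration only - the function below is our own sketch, not a standard implementation - the 3M formula is nothing more than the sum of three five-point ratings:

```python
def three_m_score(overall_sat, repurchase, recommend):
    """The '3M' composite: sum of three 5-point ratings, for a 3-to-15 total."""
    for rating in (overall_sat, repurchase, recommend):
        if not 1 <= rating <= 5:
            raise ValueError("each rating must be on a 1-to-5 scale")
    return overall_sat + repurchase + recommend

# A customer who rates 4, 5 and 3 contributes a composite of 12 out of 15.
print(three_m_score(4, 5, 3))  # 12
```

Note that summing changes only the packaging of the three answers, not the information they contain.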

Fred Reichheld is revered by many as the thought leader in customer satisfaction, as well he should be. When he says that recommend is “the measure that matters most” we believe him, as long as the measurement set is confined to the realm of customer satisfaction itself. Given that recommend isn’t subject to as many operational caveats as overall satisfaction, this makes sense. But why would you want “just one measure” when the full network of information is so much more powerful? And do you really believe that an attitude can supplant behavior (sales)?

Both of these approaches share two significant shortcomings. First, they suggest that some kind of “data transformation” equals “data transubstantiation” - that the data transforms into something more meaningful, not just more easily remembered. Second, while they do begin to break from the myth that satisfaction measures are an end unto themselves, their attempts to associate these measures with sales occur at the aggregate level. Of course, the measures will correlate reasonably well in aggregate, but that’s a far cry from where the problems lie. They must address customer-level predictions.

Jan Hofmeyr has produced the defining work on loyalty and created what to all appearances is a better mousetrap, and his approach does begin to recognize that the sanctity of the information lies at the customer, not aggregate, level. He may even have discovered the “Northwest Passage,” but the problem here is that the algorithm remains proprietary.

What truly is called for is an account-level predictive system that is open and accessible to all.

The consequence of conventional thinking is that attention to customer satisfaction can do as much harm as good.

Observation: Making diamonds from coal takes a lot of squeezing - and usually a lot of coal. Seek diamond mines.

Assertion 5: Performance is not an absolute measure.

That baseball team of ours played a triple-header yesterday. They scored two runs in the first game, six runs in the second and seven in the third game, for an average score of five. Anything above average we consider “good,” so it looks like our team is two for three and actually improved over the course of three games.

In reality, we lost all three games. Here again we have a kind of “missing the measure that matters most” problem based upon data reduction rather than data enrichment. Instead of expanding our horizon to incorporate competitive references, we contracted our assessment to a single, limited measure. This is ludicrous when applied to baseball; why, then, should it be acceptable in business?

If satisfaction does mean something, then acting on that meaning certainly is not about simply increasing your score every year. It demands a competitive context. In the context of the ballgame, if the other team scored two runs a game, we’d be undefeated, but if they had scored eight runs a game, we’d be winless. If your best customer gives you “a seven this year and an eight next” is that a good thing? Well, yes, unless your primary competitor improves from a six to a nine.

Using internal products or categories to benchmark is also spurious. If your company rewards Product A for an 8.2 performance more than Product B for a 7.8 without benefit of a competitive context, the reward may be an erroneous one. The more a product category is subject to complexity, the lower the scores all brands within that category receive. For instance, in IT the 8.2 might be a low hardware category score while the 7.8 could well be an outstanding software category performance. Internal comparisons are easy but they’re also often inappropriate. (Keep this fact in mind when reading the final point.)
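As a minimal sketch of what a competitive context adds - all brand names and scores here are hypothetical - comparing each product to the rivals in its own category can reverse an internal ranking:

```python
# Hypothetical mean ratings on a 10-point scale, grouped by category.
category_scores = {
    "hardware": {"Product A": 8.2, "Rival X": 8.6, "Rival Y": 8.5},
    "software": {"Product B": 7.8, "Rival P": 7.1, "Rival Q": 7.3},
}

def competitive_gap(category, brand):
    """Brand score minus the mean of its competitors in the same category."""
    scores = category_scores[category]
    rivals = [v for name, v in scores.items() if name != brand]
    return scores[brand] - sum(rivals) / len(rivals)

# Product A's higher raw score actually trails its category...
print(round(competitive_gap("hardware", "Product A"), 2))  # -0.35
# ...while Product B's lower raw score leads its category.
print(round(competitive_gap("software", "Product B"), 2))  # 0.6
```

The internal comparison rewards Product A; the competitive comparison rewards Product B.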

The consequence is that satisfaction scores without a valid, competitive context provide ambiguous guidance to decision makers at best and completely wrong guidance at worst.

Observation: Markets are based upon competition. Why, then, shouldn’t satisfaction be too?

Assertion 6: Methodology drives results.

How surveys are conducted always has a sizable impact upon the results received. Here are some things to consider:

•   If the work is done from an organization’s known customer lists, response rates are quite good (and the research is cheap). That said, the competitive picture is likely to be poorly or inaccurately represented. If the sponsoring company is identified to respondents, response rates go up but so does the possibility of experimenter demand: respondents who know they’ll be identified to you often exaggerate pleasure, while those who are safely blind may exaggerate displeasure. You’ll also miss those important customers residing outside the database.

•   If the work is done double-blind (you don’t know who is responding and they don’t know who is asking), it is the most objective approach. It is also the most expensive: response rates are lower when your company is not identified, and it may take a great deal of sample to fill the brand quotas. The results, however, are generally free of the most obvious biases.

For tough-to-find users, some research providers fill quotas by deploying “non-random” recruitment. This can result in extraordinary scores. If users are scarce and you must use unusual recruitment tactics, such as user groups or communities, a brand will receive abnormally positive ratings simply because of exclusivity. Large brands have a fairly normally distributed user group; niche brands almost always skew positively. It’s for “those in the know.”

Customer satisfaction seems so intuitive that anyone could manage a program. But if we are able to demonstrate anything through this series of articles, it is that managing a customer satisfaction program demands real expertise, not a weekend of training for the executive put in charge.

The consequence of the amateur approach is that smart executives can easily be led to the wrong conclusions by problems that are invisible in the reporting of customer satisfaction scores.

Observation: Without extreme vigilance, technical incompetency goes unexposed.

Measurement confiscation

Assertion 7: Customer feedback is a gift, not an obligation.

Measurement confiscation refers to the hijacking of customer satisfaction instruments for other goals. It’s not, as it may seem, a harmless and more efficient use of scarce resources.

In recent years, every department within organizations has appealed for a slot in the customer satisfaction survey. The reason why will be discussed in the final point. There is a very fundamental flaw in this thinking, though. It presumes that your customer has the interest and the capacity to dissect your business at a level of detail emulating your own. This capability is rare, and any interest on the customer’s part in doing so is even rarer.

Few things matter to your customers the same way they matter to you - a very common and human condition. Egocentric thinking is reflected in most every group. Brand managers believe that “cool and crisp” might be received differently than “crisp and cool”; procurement departments desperately (and generally futilely) want to be “a big deal” and discover that their changes in policy drive real changes in revenue; and sales executives want to fine-tune their smile and handshake as if it’s their personal charm and not the quality of their company’s products that makes sales.

Observation: Company departments and individuals have no inalienable right to be measured. Contributions must be distinguishable and relevant, and then customers need to care enough to actually respond to a survey.

The sad truth is that life is never as much about “us” as we believe it to be. In the business environment customers aren’t product designers, they’re product choosers. They generally can’t detect Yakima Valley hops or quote monitor resolution. They just “know what they like and buy it.” Think ambience over analytics.

The consequence of asking too much of respondents is that not only do you get distracting findings on measures that don’t matter, you also bore and fatigue your respondents to the point where they can’t give you accurate feedback on the measures that do matter.

Observation: Customers focus more upon what comes out of a product than what goes into it - as well they should. It isn’t about what the product does so much as what the product does for them.

Bonus observation: The paradox is that the folks who least understand that it’s the “end” that matters are often the folks charged with fixing the “means.”

Assertion 8: Respondents answer best what matters most - to them!

A favorite hotel chain of one of the authors asks guests to rate it on 50 different items - 45 of which are immaterial to a guest’s experience. What’s worse, the chain then asks guests to grade competitors that the author seldom if ever stays at on the same battery. He now understands why it’s called a battery: that’s what he feels the victim of by the time the survey is finally complete. More often than not he drops out without finishing.

A sure way to lose survey respondents is to ask them to do exercises that are either boring or of no relevance to them. Customer satisfaction work, often guilty here, queries customers ad infinitum on issues that have little or no consequence to them. If the survey is too long or too boring, respondents will either drop out early or give quick, random input. The former is bad but the latter is worse. As a consequence, we incorporate garbage into supposedly meaningful scores.

Observation: Most surveys pursue what the client expects to hear at the expense of what the respondent actually has to say. This is an enormous missed opportunity.

Inappropriate application

Assertion 9: Well-meaning associates will inappropriately use your customer satisfaction data if you don’t prevent them. Doing so is your responsibility.

The inappropriate application of customer satisfaction, as we have previously written, stems from the politics of customer satisfaction, which has changed the goal from pleasing the customer to pleasing the organization. This leads us to our final two consequences.

Somewhere over the past few years, in lemming fashion, organizations have completely flipped the orientation of satisfaction data. What originated as a method to better satisfy the customer lives on today as a performance measure of the organization itself. Sadly, most customer satisfaction systems care far more about parsing out internal credit and/or blame than truly capturing the relationship between the brand and its customers. And, adding insult to injury, they don’t even parse very well.

The design of most customer satisfaction work is largely an analog of how the organization operates. The rationale goes something like this: The contribution of individual departments can be teased out using customer satisfaction scores. Then, each individual department can work independently to do its part to increase overall scores. Collecting this data is, of course, presumed to be a turnkey operation. In reality it’s anything but. As compelling as it might be to quiz customers on each and every component of their experience, the sad truth is that they care very little about what you do and a great deal about what they get. Either we’ve forgotten that or don’t know how to explain it to top management.

Think about it: customers do not owe any organization feedback on their performance. It is either a gift that customers bestow upon you, or a service they perform for some gratuity or level of remuneration. Your customers don’t like taking surveys any more than you do. It’s a job and every job has a cost associated with it. The consequence of misappropriation is that you often get respondents who aren’t your best customers providing feedback on issues that are not important to most customers, and that may have nothing to do with why they buy your products.

Observation: Who took the customer out of customer satisfaction?

Assertion 10: The inclusion of customer satisfaction in compensation is well-intentioned but misbegotten.

The “white flag” of customer satisfaction occurs the day it becomes a part of any organization’s scorecard or dashboard. While this move seems so logical and appealing, it always has devastating effects. Everyone wants in on the measurement action. A rising score may entitle you to some additional slice of the bonus-pool pie. So what’s wrong with that?

Observation: The goal of customer satisfaction is thoroughly corrupted when it transforms from the altruistic service of better pleasing customers to the unfettered purpose of serving oneself.

Every game known to organizations kicks into play. We’ve seen normally responsible company executives coerce customers, hide sample and dispute scores and research methodology even when they know absolutely nothing about research. Here is an almost verbatim conversation we had once with an executive whose bonus depended on the customer satisfaction score:

Executive: “I don’t have a Ph.D. in research, but I went to my company’s training session on customer satisfaction and I know that is not how that question should be asked.”

Researcher: “I do have a Ph.D. in research, I designed the training session you attended, and I can assure you that is exactly how the question should be asked.”

Executive: “Well, then, you must have made a mistake because ‘The Score’ could not possibly have gone down this year after all the hard work we did.”

Observation: Sadly, the analyst who once was a partner to decision-making executives is now the police.

Finally, as if these issues weren’t bad enough, the same executives who depend on customer satisfaction scores for their bonuses often underfund the customer satisfaction research. The samples are almost never large enough to provide confidence intervals smaller than the goals that are set. In other words, organizations often set customer satisfaction goals that can be met or missed solely on the basis of the variation inherent in a random sample. Said otherwise, the bonuses awarded are random and indefensible.
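As a back-of-the-envelope sketch - the sample size, standard deviation and goal below are hypothetical - it takes only a line of arithmetic to check whether a tracking goal is smaller than the sampling noise around the mean:

```python
import math

def margin_of_error(sd, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a mean."""
    return z * sd / math.sqrt(n)

# Hypothetical tracker: 150 respondents rating on a 10-point scale, sd ~2.
moe = margin_of_error(sd=2.0, n=150)   # roughly 0.32 points
goal_improvement = 0.2                 # management's annual target
# If the goal is smaller than the margin of error, hitting or missing it
# can be pure sampling noise.
print(moe > goal_improvement)  # True
```

Any goal smaller than that half-width can be met or missed by chance alone, which is exactly what makes the resulting bonuses indefensible.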

The consequence is that organizations fail to reward the behavior they intended to reward, and thus forgo the benefits they expected.

And finally, recall the discussion at the beginning of this article. Enhancing customer satisfaction may be completely extraneous to the well-being of your firm. Yikes!

Observation: If asked to install customer satisfaction data into your compensation formula, turn and run! Doing so completes the transubstantiation from customer satisfaction to corporate satisfaction.

In the October issue, our final article will propose some fixes to the problems of customer satisfaction that help to avoid the worst consequences of a program gone awry.