Editor's note: Dinyar Chavda is a partner at Chavda Associates Research and Consulting, Bala Cynwyd, Pa.

Customer satisfaction measurement (CSM) has cost companies millions of dollars and months of manpower. In addition, attempts to improve satisfaction - changing employee behavior, exhorting the front line, linking satisfaction to compensation - have resulted in further expenditures of both money and time. Research companies and research departments have used a variety of statistical techniques to understand the drivers of customer satisfaction, either for the company as a whole or for specific departments, such as call centers. These analyses purport to show the key levers (responsiveness, courteousness, speed of resolution, etc.) that, when manipulated, should result in improvement.

Despite all these efforts, most graphs of satisfaction scores resemble what one client called "the monitor of a patient on a life-support system," i.e., random squiggles around a norm. (And researchers spend a lot of time trying to analyze and explain these random variations!) In fact, it is remarkable that companies still persist in collecting and trying to improve their satisfaction scores, using the same approaches from year to year.

So why do companies bother trying to improve customer satisfaction? The cynical may say that it is not PC to cancel a customer satisfaction study, or to stop customer satisfaction enhancement efforts. However, there is actually a very good business reason for trying to improve customer satisfaction - in most industries, the product (or service) of a company is at parity with that produced by its competitors. Even if a company develops something that gives it a competitive advantage, in most cases this is only short-lived, as others will reverse-engineer similar products. What does differentiate one company from another is the customer experience it provides, or, in the words of Michael Hammer, whether the company is easy to do business with. These days, it is how a product or service is delivered, not the product itself, that can lead to loyalty and commitment on the part of customers. In other words, it is all about execution. As Vince Lombardi once said, "You can look at my playbook, but you still have to meet me on the field."

Thus, it is logical for companies to persevere in measuring and trying to improve their customer satisfaction. Measuring it is not difficult. The problem lies in just how to improve it.

Why satisfaction is difficult to improve

There are several reasons why customer satisfaction remains unchanged:

a. Obliquity

Customer satisfaction and its drivers obey what Michael Hammer calls the "principle of obliquity." This is a wonderful term which means that, while we can measure them directly, we can only influence them indirectly. For example, it should not surprise anyone that one of the key drivers of satisfaction for a restaurant is courteous service. One way to improve the customer experience could be to provide more courteous service. How does one go about doing it? It is impossible to do it directly - one can only alter things that employees do that result in a perception of courteousness on the part of the customer. And therein lies the rub!

Even when it comes to more direct issues, fixing problems is not straightforward. Consider the effect of putting someone on hold in a call center. If you decide to improve perceptions on this, what should you tell your CSRs to do? Should they never put anyone on hold? Is it acceptable to put someone on hold as long as permission is obtained? Do they also need to explain why the person is being put on hold? Or is it an issue of how long the person is kept on hold?

b. Management opinion drives company behavior

Take a restaurant company where the research has shown that courteous service is a driver, and that there is room for improvement. How does the company act on this? Usually, there is a meeting where someone presents the research, and then people brainstorm about ways to increase courteous service. There is usually no dearth of ideas about how this can be done, and a long list is generated, with some ideas targeting specific behaviors and others relating to imagery. At this stage, an argument often ensues about which of the actions will have results that are really desired by customers. The list of actions is winnowed down based on "management judgment" (or feasibility) until a final compromise list is arrived at. Then, attempts are made to institutionalize this list. However, there is very little evidence to support the choices, and, therefore, little reason to persevere when things don't improve quickly.

It is also highly likely that many of the issues that are important to customers will never be considered, because the company is looking at courteousness from its own viewpoint, and not from that of the customers. For example, it may not consider the process by which telephones are answered and reservations made, the wait time when a party arrives, or the speed of service. Even if these things are taken into account, what becomes the standard for courteous service may be a function of where the company is headquartered, as that forms its mindset. If you go to an upscale restaurant in New York for lunch, you expect to be seated very quickly, with the waiter there immediately at your table to take your order and rush you your food, then deliver your check almost while you are chewing your last bite! The thinking here is that your time is very valuable, and so, unless otherwise instructed, the restaurant is going to make this a fast experience for you. The same kind of treatment in the Midwest would be considered extremely rude, and people would think that you were trying to get rid of them! Variations in desired behaviors like this are rarely taken into account in setting standards for service, and so it is not surprising that measurable improvements rarely occur.

c. CSM measures only part of what is needed

The schematic in Figure 1 may best represent the predicament that companies face. For example, a bank may have several work processes for dealing with customers at the teller's window. These work rules (e.g., greeting a customer by first name or last name, asking how the teller can help, explaining procedures, etc.) result in behaviors that the customer experiences; based on these behaviors, the customer develops an impression of the bank and of the attitude that the bank has towards him or her. Current CSM methodology does a fine job of measuring these final attitudes (courteousness, responsiveness, speed, knowledge, etc.), but falls short in identifying the specifics of what to do to improve, because the company does not know which of its multitude of actions is responsible for each of the final attitudes that the research measures.

Figure 1

d. CSM measures the wrong things

Companies tend to measure what is easy to measure ("courteousness," as opposed to the specific behaviors experienced by the customer), what they have always measured (one telecomm client insisted on using the same highly technical terms in their small-business customer satisfaction study as they used when dealing with telecommunication specialists in large companies), and what matters to them, not necessarily to the customer. For example, many doctors measure and attempt to control the amount of time between when a patient arrives at the office and when the patient is admitted into an examination room. The fact that the patient then waits for 30 minutes or longer for the doctor to enter the room is ignored!

An alternative approach

One approach that has been advocated and used by companies is to build a model that links the various actions that the company can take to resultant customer perceptions and on to levels of satisfaction. While these have reportedly been successful, they tend to be complex, difficult to develop, and often have a "black box" element to them.

a. The basic concept

The alternative that we have used takes a much simpler and more direct approach. Our approach involves measuring performance and needs in terms of the actual behavior experienced by the customer. So for the earlier call center example of placing someone on hold, instead of determining ratings of "being put on hold" on a 5-/7-/10-point scale, we first create a list of the possible levels of performance that could occur (see Figure 2).

Figure 2

We then determine which level actually occurred, as well as the satisfaction resulting from each level of performance, which allows us to compute the impact of changing behavior (as explained below).

b. Data collected

Continuing the call center example, Figure 2 shows a typical questionnaire that gets at desirability, importance and performance.
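
To make the data concrete, here is a minimal sketch, in Python, of how the behavioral levels and responses for the hold-time issue might be recorded. The level wording, the rating scales and the field names are illustrative assumptions, not the actual Figure 2 questionnaire.

    # Hypothetical layout for one behavioral issue ("being put on hold").
    # Levels are ordered from worst to best; the wording is illustrative only.
    HOLD_LEVELS = [
        "Put on hold so long that you hung up",
        "Put on hold without being told why",
        "Asked for permission and told why before being put on hold",
        "Not put on hold at all",
    ]

    # One respondent's answers: a desirability rating for every level
    # (e.g., 1 = very undesirable ... 9 = very desirable), an importance
    # weight for the issue as a whole, and the level actually experienced
    # on the most recent call.
    respondent = {
        "desirability": {
            "Put on hold so long that you hung up": 1,
            "Put on hold without being told why": 3,
            "Asked for permission and told why before being put on hold": 6,
            "Not put on hold at all": 9,
        },
        "importance": 8,  # assumed 1-10 importance rating for this issue
        "experienced_level": "Put on hold without being told why",
    }

Collecting the data this way preserves the link between what the customer actually experienced and how desirable each alternative experience would have been.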

c. Analysis method

Using a self-explicated conjoint approach (as originally proposed by Paul Green), we compute the utility of each level of performance, and the marginal utility of improving performance from one level to the next. In other words, what would happen to satisfaction if you were to change the customer experience?
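
As a rough illustration of the arithmetic, the sketch below computes utilities in one common self-explicated fashion: each level's desirability rating is centered on a neutral scale point, weighted by the respondent's stated importance for the issue, and averaged across respondents; marginal utilities are simply the differences between adjacent levels. This is a simplified rendition for illustration, not necessarily the exact scaling used in a given study.

    from statistics import mean

    def level_utilities(respondents, levels, neutral=5):
        """Average importance-weighted, centered desirability for each level."""
        utilities = {}
        for level in levels:
            scores = [
                r["importance"] * (r["desirability"][level] - neutral)
                for r in respondents
            ]
            utilities[level] = mean(scores)
        return utilities

    def marginal_utilities(utilities, levels):
        """Gain (or loss) in utility from moving performance up one level."""
        return {
            (levels[i], levels[i + 1]):
                utilities[levels[i + 1]] - utilities[levels[i]]
            for i in range(len(levels) - 1)
        }

Applied to a list of respondents shaped like the earlier sketch, level_utilities() returns one number per behavioral level, and marginal_utilities() the gain or loss from each step up the scale - the quantities plotted in Figure 3.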

d. Interpretation of results

Using the example of being put on hold, Figure 3 shows a hypothetical result.

Figure 3

The horizontal bars show the utility for each level of the behavioral scale. The scale has a zero point, corresponding to neutral. Bars to the left show actions which customers deem negative; bars to the right show actions that are considered positive. The arrows show how much marginal utility is gained or lost by changing performance from one level to the other.

By analyzing this in conjunction with actual performance, companies can determine exactly how they are performing, and the effect of changing their actions.

In this example, it is apparent that, while not being put on hold is optimal, customers do not mind too much if they are put on hold as long as their permission is obtained and they are told why. However, they get extremely annoyed if the reason is not given. Also, this company is not performing at acceptable levels in about 25 percent of the calls it gets.

Similar charts are created for each behavioral issue experienced by the customer.

e. Actions to be taken

Using the chart above, this company could try to improve by not placing the 64 percent on hold at all, but that would probably be very difficult to do, and would not lead to much higher satisfaction for this group, as the marginal utility of improvement is low. It is also possible that this would increase the total call time.

Instead, it should focus on the bottom 25 percent. Of these two groups, the 12 percent who are not told why they are put on hold can be satisfied fairly easily - just tell them why they are being put on hold. In all likelihood, this company has a policy of doing this, but it is not being universally followed, and should be emphasized in future training sessions, together with the reason for doing so.

For the 13 percent who are put on hold for so long that they hang up, further internal investigation is required to determine why they are encountering this treatment (root cause analysis).
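
One way to see why the bottom 25 percent is the right target is to weight each possible move by the share of calls it affects. The short calculation below uses the percentages quoted above together with purely hypothetical utility values chosen to match the shape described for Figure 3; the numbers are illustrative only.

    # Hypothetical utilities, consistent with the narrative: the top two levels
    # are close together, the bottom two are strongly negative.
    utility = {
        "hung up while on hold": -40,
        "on hold, no reason given": -25,
        "on hold, permission and reason given": 8,
        "not put on hold": 12,
    }

    # Share of calls observed at each level (the 11 percent remainder is assumed).
    share = {
        "hung up while on hold": 0.13,
        "on hold, no reason given": 0.12,
        "on hold, permission and reason given": 0.64,
        "not put on hold": 0.11,
    }

    # Expected gain from eliminating holds for the 64 percent: 0.64 * (12 - 8) = 2.56
    move_64_to_top = share["on hold, permission and reason given"] * (
        utility["not put on hold"] - utility["on hold, permission and reason given"]
    )

    # Expected gain from simply telling the 12 percent why: 0.12 * (8 - (-25)) = 3.96
    fix_no_reason = share["on hold, no reason given"] * (
        utility["on hold, permission and reason given"] - utility["on hold, no reason given"]
    )

    print(move_64_to_top, fix_no_reason)

Even with made-up numbers, the logic is clear: the essentially free fix for the 12 percent yields a larger expected gain than the difficult push to eliminate holds for the 64 percent.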

f. More examples of results and actions

In Figure 4, if the results were as shown in "A," the company does not have to be quite as vigilant in ensuring delivery at the highest level on that issue, so long as it does not drop to the lowest level. However, if the results were as presented in "B," there is no margin for error. Knowing where you can afford to relax your standards allows you to reallocate your resources more efficiently.

Figure 4

In the example shown in Figure 5, customers want one-call resolution but it does not have to be on the original call made by the customer; so long as the customer does not have to make a repeat call, he is quite content. While this may seem like a minor difference, it can result in a large operational saving for the company if it does not have to install systems to ensure that the problem is resolved while the customer is on the initial call.

Figure 5

This approach, which isolates the impact of changes that a company can make on each issue, requires a lot more work up front, but results in a much clearer picture of the various actions that a company needs to take to make improvements.

The actual research process encompasses the following steps:

1. Review of current research and extensive, in-depth interviews with key management and frontline workers (optionally supplemented by external qualitative research), used to develop the behavioral scales and levels.

2. Review of the scales and levels by operational personnel to ensure that they are actionable (and to get their buy-in).

3. Collection of the quantitative data.

4. Analysis and presentation to management and operations.

How companies have used this technique

Companies have productively used this approach to:

1. Improve customer satisfaction. Because the technique is based on very specific actions that the company can take, it is relatively easy to determine what needs to be done in order to make improvements.

Shortly after a study of our client's call center was completed, other circumstances led the client to institute a spending freeze. Using the results of our study, the client made every improvement that did not require any expenditure. For the first time in years, customer satisfaction went up.

Another client was unwilling to share the results of the research with the front line until management had developed "the appropriate presentation." However, operational employees obtained a copy of our presentation, and, unbeknownst to management, started to make changes in their procedures. Satisfaction, which had been flat, went up significantly and substantially.

2. Determine how to cut costs with minimal negative effect. An Internet retailer had to make large cuts in its operating costs. In collaboration with various groups in the company, we made a list of the various actions that management could take which would result in cost savings, and the expected impact that these actions would have on the customer. These included specific areas of reduced service, impact on delivery time, the way items would be delivered, number of items stocked, etc., as well as imagery-related issues. All of these were expressed in very specific terms, and a study was conducted using this method. It showed clearly the changes that would have the most negative effect on the customer, as well as others that would have minimal impact, as long as certain minimum standards were maintained on that issue. In fact, in some cases, the change, while resulting in lower costs to the company, was actually preferred by the customer.

The company made many of the changes that the research indicated would have no negative effect, but also had to make some that were predicted to have negative fallout. Subsequent research validated our predictions.

3. Reallocate resources to get the most bang for the buck. A call center kept emphasizing courteousness in its monthly training, and the CSRs focused on this issue to the point where it was no longer yielding additional benefits. While no one wanted rude CSRs, the research showed that customers wanted additional information more than they wanted even more courteous behavior.

4. Convince frontline people to sell. Many companies are under tremendous pressure to reduce or eliminate the cost of the call center. In response, some have tried to make their CSRs sell additional products or services to customers who call in. This is usually met with tremendous resistance from the CSRs who believe that customers do not want them to do so.

In many cases, our research shows that customers actually welcome attempts to sell them additional products or services, as long as the offers are appropriate to their specific needs. This has helped overcome CSR reluctance, particularly when coupled with subsequent qualitative research in which the CSRs can watch customers discuss the subject.

5. Reengineer work processes. One study we conducted showed (to no one's surprise) that one-call resolution was a key satisfier. The company instructed its employees to try to do this as much as possible. So, when a customer called to say that they had lost their membership ID card, the CSR sent out a new one the next day. A week or so later, the customer called again, requesting the same ID card, as they had not received it, and the CSR immediately sent out a new one. It was only after the irate customer called three or four times that it was discovered that the customer had moved, and the replacements were being sent to the wrong address. The work process had to be changed so that the CSR now confirms the address before sending the card.

Other problems have required simple changes, such as introducing two employees to each other and requiring them to communicate on certain issues. Yet others have necessitated major investments in computer systems in order to be able to consolidate information and provide it where needed.

6. Reorganize the company to be able to deliver services better. Many companies are organized in a manner that may make sense from an internal viewpoint, but often does not from a customer's. Typically, they force customers to deal with multiple departments, many of which do not communicate with each other, thus requiring the customer to do so.

Occasionally, one sales department cannot access information regarding the customer's business with other sales departments, and therefore treats large customers of other departments as though they were strangers to the company.

Finally, the responsibility for the quality of the customer experience often resides with no single individual or department, and, as a result, can fall through the cracks.

While making organizational changes to better serve the customer is a huge undertaking, some clients have started to do so.

7. Increase acceptance of customer satisfaction research by frontline employees. The typical CSM study is viewed by the frontline, justifiably, as a tool used by management to beat up on them - they are told to do better, but are either not given specific direction, or given conflicting instructions. The approach outlined in this article gives clear guidance as to the actions that are required. In fact, part of the process is to obtain buy-in from the operational groups prior to conducting the research to ensure that the questions are worded in a way that gives them information they can use.

An additional outcome of the research is that it often reveals how what the customer wants may conflict with other objectives the company has, and exposes inconsistencies between customer experience requirements and the way the operational people are evaluated and rewarded. These conflicts have to be reconciled.

8. Improve relationships within the company. Relationships within the company can improve because the operational people are not made to feel as though management is harassing them to improve without providing direction.

Further, by sharing the results across the company, clients create a common language for describing customer needs, as well as agreement about what those needs actually are.

Easily overwhelmed

One concern often expressed by management is that they can easily get overwhelmed by the amount of information contained in these studies, and do not know how to develop solutions to meet customer needs. Because the approach expresses customer needs in terms of the customer's experiences, there are often a multitude of ways and processes that can produce the desired end. It is here that the workers on the front lines can be of most use, as, after all, they are the ones closest to the customer. They are often the most creative at finding simple, inexpensive solutions to problems. In this regard, the following quote from General George S. Patton is useful for management to remember:

"Never tell people how to do something. Tell them what you want accomplished, and they will surprise you with their ingenuity."

The role of management

In order for the above processes to work, and for the changes to occur, senior management has to be fully engaged. They need to become and stay visibly involved in the whole process. The CEO of one of our clients has attended the presentation of the results of this study several times to show how important it is to him, and because he claims to get new ideas every time he attends.

Cost of change

It is certainly possible that attempting to implement some of the changes indicated by this type of research can be expensive (e.g., installing a computer system to keep track of every interaction with a customer is a major capital cost). However, our clients have also informed us that they have often been able to make many of the changes at no additional cost, or have actually saved money by doing things the way the customer wanted.

For example, one client had a large number of P.O. boxes for clients to send different forms to. The original reason was that this would be more efficient for the company, as the forms from each P.O. box would go to the appropriate department in the company. However, when they eliminated the vast majority of the box numbers in order to be more in line with what the customer wanted, their total costs actually went down, as they were able to eliminate the positions of those whose prior job was to ensure that the form had been sent to the right box, and return it if it was not.

Observational, not remedial

In many companies, the current CSM system is an observational tool, but not a remedial one. By following the methods outlined in this article, firms can determine which activities will lead to results desired by customers, thus uncovering the true drivers of customer satisfaction.