Editor's note: Matt Michel is vice president of Decision Analyst, an Arlington, Texas, research firm.

For several years now, customer satisfaction research has been the rage of corporate America. Everyone wants to know how their company rates with its customers. That's good. What's not so good is the way most companies approach customer satisfaction research and the end results of their research investment.

Frankly, most customer satisfaction research is wasted. The research is performed. The data are presented. The commandment that improvement will be made comes down from on high. Then, the study goes on a shelf, not to be revisited until the next survey cycle. Nothing really changes. Why?

Unfortunately, most customer satisfaction research fails the "so what?" test. You learn that your customer satisfaction rating is 7.8 on a 10-point scale, or 147 on a customer satisfaction index, or some other statistic. Often, customer satisfaction research cannot even indicate whether your performance is improving. Even if (and it's a big if) you've kept the survey and methodology the same, you still don't know whether your satisfaction improved, because customers tend to change their expectations. As a rule, expectations rise over time. Performance that was considered good a couple of years ago may make you look like a piker today. Everywhere today, stores are staying open longer, warranties are lengthening, return policies are liberalizing, and so on. You can improve your service and still get left behind. You may get better, yet see falling scores!

You get a customer satisfaction rating...so what? By itself, it doesn't tell you anything meaningful. It doesn't tell you how to improve or where to improve. It really doesn't even tell you if you've improved. That's why most customer satisfaction studies collect dust. They fail to add value to the business.

Customer satisfaction research is not complicated. Done correctly, it is consistent with two quality principles: kaizen and ichiban.

Kaizen is a Japanese term for continuous improvement. Adherents of kaizen make many small, incremental changes on a continuous basis. Customer satisfaction research should support kaizen by revealing where improvement is most warranted and will offer the greatest return. In other words, customer satisfaction research should help prioritize a company's improvement program.

Ichiban is another Japanese term. Loosely, it means "being the best." Adherents of ichiban strive for excellence and superiority to others, for preeminence. If others improve a little, then the company seeking ichiban must improve as much or more. Customer satisfaction research should also support ichiban by indicating a company's position relative to other companies for benchmarking purposes. In this way, customer satisfaction research shows whether your company is improving or falling behind relative to other leading companies and suggests which companies merit emulation and further study.

Kaizen-directed research

Kaizen-directed research is really quite simple. Quality guru Phil Crosby described quality as conformance to customer requirements. Satisfaction works the same way. Let customers tell you what's important and how satisfied they are with your performance. Quantify importance and performance for each definable area of the company as the customer sees it (if customers cannot see an area, they cannot rate it). Once importance ratings and satisfaction levels are determined, it becomes simple to prioritize the areas where the most improvement is needed. This is done through quadrant analysis or factor weighting.

Quadrant analysis is simply a graphical depiction showing all areas of the company plotted according to their importance and performance (see facing page for an example). The field of rating points is then divided into four quadrants: low relative importance/low relative performance, low relative importance/high relative performance, high relative importance/high relative performance, high relative importance/low relative performance. The last quadrant (high relative importance/low relative performance) is the "opportunity quadrant." Here you stand to gain the most from improvements.

In a quadrant analysis, all of the importance and performance (i.e., satisfaction) measurements are made relative to each other. This relative scaling makes prioritization possible and transforms the research into a diagnostic tool that adds value.
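The mechanics can be sketched in a few lines of code. This is a minimal illustration, not a prescribed methodology: the attribute names and ratings are hypothetical, and the quadrant cut points are taken at the medians so that "high" and "low" are relative to the field, as the article describes.

```python
# Hypothetical quadrant analysis: each attribute is rated for importance and
# performance (satisfaction) on a 10-point scale, averaged across respondents.
from statistics import mean, median

# Illustrative data: attribute -> (importance ratings, performance ratings)
ratings = {
    "on-time delivery":  ([9, 8, 9, 10], [6, 5, 7, 6]),
    "technical support": ([8, 9, 7, 8],  [8, 9, 8, 9]),
    "billing accuracy":  ([5, 4, 6, 5],  [9, 8, 9, 9]),
    "packaging":         ([4, 3, 5, 4],  [5, 4, 6, 5]),
}

# Average each attribute's ratings, then split the field at the medians so
# every attribute falls into one of the four quadrants.
scores = {a: (mean(imp), mean(perf)) for a, (imp, perf) in ratings.items()}
imp_cut = median(i for i, _ in scores.values())
perf_cut = median(p for _, p in scores.values())

def quadrant(imp, perf):
    hi_imp = imp >= imp_cut
    hi_perf = perf >= perf_cut
    if hi_imp and not hi_perf:
        return "opportunity"        # high importance / low performance
    if hi_imp:
        return "maintain"           # high importance / high performance
    return "overkill" if hi_perf else "low priority"

for attr, (imp, perf) in sorted(scores.items()):
    print(f"{attr}: importance={imp:.1f}, performance={perf:.1f} -> {quadrant(imp, perf)}")
```

With these made-up numbers, "on-time delivery" lands in the opportunity quadrant: customers rate it as highly important but poorly performed, so improvements there stand to gain the most.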

Factor weighting is similar in approach and results to quadrant analysis. With factor weighting, the performance measures are weighted by their importance. There are countless approaches to factor weighting. Some are relatively simple and involve little more than multiplying each performance measure by its importance measure. Others use statistical techniques, such as factor analysis or regression, to determine the weighting scheme. In the end, factor weighting yields adjusted performance measures that reveal where you should focus your improvement efforts.
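The simplest version of the multiplication approach can be shown directly. In this sketch (attribute names and ratings are hypothetical), each attribute's shortfall from a perfect score is weighted by its importance, producing a single priority number per attribute:

```python
# A minimal sketch of simple factor weighting: weight each attribute's
# performance gap by its importance. Bigger numbers mark the areas where
# improvement should pay off most. All figures are illustrative.

attributes = {
    # attribute: (mean importance, mean performance), 10-point scales
    "on-time delivery":  (9.0, 6.0),
    "technical support": (8.0, 8.5),
    "billing accuracy":  (5.0, 8.75),
    "packaging":         (4.0, 5.0),
}

TOP = 10.0  # top of the rating scale

# importance x (shortfall from a perfect score)
priority = {attr: imp * (TOP - perf) for attr, (imp, perf) in attributes.items()}

for attr, score in sorted(priority.items(), key=lambda kv: -kv[1]):
    print(f"{attr}: weighted improvement priority = {score:.1f}")
```

Note how the weighting changes the picture: "packaging" has a large raw performance gap, but its low importance keeps it behind "on-time delivery" on the priority list.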

Using either a graphical technique like quadrant analysis or a numerical technique like factor weighting should yield a prioritized list of areas for improvement. It highlights the areas where a performance improvement will have the most impact on overall customer satisfaction. Now that's something managers can start to act on. That's of real value.

Acting on the data

Usually, managers can explain why a particular area rises to the top of the list. Of course, this raises the question: if a manager knows an area suffers from a performance deficiency, why hasn't he or she done anything about it? Sometimes it's because there has not been the organizational will or support to make the necessary changes. In other words, the pain of making changes is perceived to be greater than the benefit. In these cases, customer satisfaction research might reveal that the benefits are greater than previously believed, leading to positive change.

Other times, there is a dispute about whether changes are really needed in an area. This usually happens when managers hear from only one or two complainers and conclude that no one else is disgruntled with the performance. If the research makes it apparent that more than a small minority of customers are unhappy, it can shock the company out of complacency.

Finally, the corporate emphasis is sometimes in the wrong areas. This is the reverse of the previous case. Instead of being ignored, one or two complaining customers are vocal enough, and have the right ear, to shift an entire company's strategy toward resolving their pet problems. If the research shows that most customers are fairly satisfied with current performance, or that the area in question is not that important in the larger scheme of things, the company is free to redirect resources where a real impact can be made.

While managers can usually cite the reasons customers identify an area as needing improvement, there are occasions when managers do not know, or more typically do not agree on, why a specific area needs improvement. When this happens, in-depth follow-up research is warranted. The follow-up work may consist of little more than customer focus groups, or it may necessitate a rigorous quantitative study. The right methodology depends on the particulars of the situation.

A moving target

A satisfaction study in any given year is, of course, merely a snapshot. Performing research on a consistent basis adds a time element. Many companies conduct annual satisfaction assessments. The annual reviews help companies keep their priorities in tune with the market. This is important because customer satisfaction is a moving target.

Satisfaction research is not a "sometimes" thing. Companies need to periodically reassess their performance, because performance changes, customer expectations change, and importance ratings change over time. This year's hot button may be replaced next year as performance and requirements change. It's easy to imagine, for example, how a company could perform satisfaction research and find very little need for improvement in EDI (electronic data interchange). Then, in the space of a year or less, their customers shift on them, as more and more companies become comfortable with the Internet, or as a related supplier sets a new standard. Satisfaction research should be repeated at least annually.

Sometimes it is simply not appropriate to wait a full year to conduct satisfaction research. For some companies, a year is an eternity. Typically, these organizations operate in fluid markets where change occurs rapidly. Waiting a year entails the risk of missing a market shift. By the time the changes are identified, the company is no longer in front of the trend and must engage in recovery procedures. In a turbulent industry, it may never catch up. Thus, such companies engage in ongoing tracking, plotting trends on a monthly or, in a few cases, weekly basis. This allows them to spot a shift in expectations or a dip in performance early, while diagnostic efforts can still determine the source of the change in time to develop countermeasures before too much slippage occurs.

Satisfaction tracking should not replace the annual review. This is because satisfaction tracking centers on a few key indicators to maintain executional speed and to remain affordable. The annual study generates in-depth analysis that helps direct improvement efforts on a broad scale.

Ichiban-directed research

Customer satisfaction research that supports only the principle of kaizen points the way, but does not reveal the distance a company must travel. In other words, kaizen-directed research shows where improvement efforts should be directed, but not how well a company is performing. There is no benchmark for comparison. To support the concept of ichiban, or being the best, companies need to know how far they must travel. They need the ability to compare themselves to others. In short, they need benchmarks.

Benchmarking can occur in three ways. The first is to benchmark against direct competitors. The second is to benchmark against noncompetitive companies who also supply your customer base. The third is benchmarking against leading corporations who may or may not share your customer base.

  • Comparisons with direct competitors. During a customer satisfaction study, it is easy to benchmark against the competition if customers use two or more suppliers of a product. If they do not, it may be necessary to construct a parallel research study of competitors' customers. Ideally, of course, the respondents in the study would have direct experience with you and with your competition.
  • Comparisons with noncompetitive suppliers to your customer base. Any supplier to a company's customer base is a potential benchmark candidate. Sometimes noncompetitive suppliers to your customers provide a more relevant measuring stick than your direct competitors. For example, you may offer the best technical support in your industry and still be perceived as a poor performer because your entire industry underperforms. Remember, your customers only know what they experience. They may see superior technical support coming from suppliers in other industries and may not know your competitors well enough to realize that you offer the best in your field. If the issue is important enough to them, they may change suppliers, assuming everyone is better than you.

Furthermore, identifying noncompetitive suppliers who excel in specific areas is often more useful than identifying your strengths and weaknesses vis-à-vis direct competitors. A noncompetitive supplier is more likely to perform a quid pro quo where your people examine the areas where they excel in return for letting their people examine an area of your company that interests them.

  • Comparisons with leading American corporations. Finally, it is also possible to determine how a company stacks up against a number of leading U.S. corporations by constructing a satisfaction index that is comparable with the American Customer Satisfaction Index (ACSI), developed by the University of Michigan. Since Michigan's methodology is in the public domain, it is possible to design a customer satisfaction study so that a company can compare its performance with such household names as Nordstrom, Federal Express, Southwest Airlines, Wal-Mart, and even the U.S. Postal Service.
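To make the idea of a comparable index concrete, here is a hedged sketch of an ACSI-style 0-100 score. The real ACSI combines three survey questions (overall satisfaction, disconfirmation of expectations, and comparison with an ideal) using weights estimated by a latent-variable model; the equal weighting and simple rescaling below are simplifying assumptions for illustration only.

```python
# Illustrative ACSI-style index: rescale three survey ratings to 0-100.
# Equal weights are an assumption; the actual ACSI estimates weights
# statistically from the survey data.
from statistics import mean

def acsi_style_index(overall, expectations, ideal, scale_max=10):
    """Combine three 1..scale_max ratings into a single 0-100 index."""
    raw = mean([overall, expectations, ideal])
    return (raw - 1) / (scale_max - 1) * 100

# One respondent rating a company 8, 7, and 7 on 10-point scales:
print(round(acsi_style_index(8, 7, 7), 1))
```

A score built this way on the same 0-100 scale gives managers a rough reference point against published index values, even though it is not the ACSI itself.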

Because an ACSI type of rating covers the total company, it does not yield the in-depth, department-level metrics that let managers understand why and where another company outperforms them. Nevertheless, many companies find it valuable. It gives managers and employees an easily understood reference point for evaluating their performance.

A comparison to the index helps companies understand whether they are making gains over time relative to other leading corporations. Remember, satisfaction is a moving target. Consumer expectations are not stagnant; neither is corporate performance. Your company may show a year-to-year gain in performance only to discover that your relative position among the leading U.S. corporations slipped because you weren't improving fast enough.

Conversely, consumers may be crankier and more demanding, giving lower ratings across the board. In this environment, heroic efforts to improve may only maintain last year's satisfaction rating. In isolation, this can be very demoralizing. Viewed in the context of a broad downward movement in the satisfaction rankings of other leading companies, it may be seen as a victory.

Kaizen- and ichiban-directed research down the distribution channel

Satisfaction research best supports kaizen and ichiban when it focuses on the next step in the distribution channel. With each successive step down the channel, respondents are further removed from the company; simply stated, there are fewer opportunities for the company to touch the customer. Managers should be aware that any down-channel research becomes a de facto evaluation of the distribution step immediately preceding the responding company, as well as of the things your company does that directly touch the respondent. This can be valuable in helping the channel improve, but it will not provide an unvarnished view of corporate performance and has limited diagnostic value.

So how can a company evaluate end-user satisfaction for the purposes of continuous improvement and becoming the best? Often the best choice is a form of product testing. Product testing is, in reality, a narrow form of satisfaction research. It deals with the key aspects of the product that an end user can assess and a company can change. Done correctly, product testing yields the diagnostic attributes of satisfaction research, though limited to the product itself. It can also provide benchmarking against related products; the product does not need to be from the same category, but it must be related.

Truly valuable

Customer satisfaction research is important. However, it is often not useful, due to flawed approaches and methodologies. To be truly valuable, satisfaction research must identify the areas where corporate resources can be allocated to stimulate the greatest overall return from performance improvements, supporting kaizen, or continuous improvement. It must also identify how well a company performs relative to others, supporting ichiban, or being the best. Companies should be wary of attempting satisfaction research too far down the distribution channel and should instead consider shifting to product testing to measure end-user satisfaction. Finally, satisfaction research that meets these requirements can provide real value to its sponsor, but only if it's acted upon.