Editor's note: Bill Etter is vice president and director of research at Rockwood Research, St. Paul.

Customer satisfaction research comes in many forms and varieties. There are differences in who is measured and how they are measured. Most studies measure only the client product or service; few measure competition. There are differences in the scales used to measure performance. Some use satisfaction scales (not satisfied to very satisfied); some use expectation scales (worse than expected to better than expected); others use requirement scales (falls short of requirements to exceeds requirements). There are differences in measuring attribute importance. Most don't measure it; some use rating scales; few use constant sum techniques. There are differences in calculating attribute importance; some prefer stated importance; some prefer derived importance; some look at both. This list goes on and on.

Something is missing

Regardless of the methodology, most practitioners are bypassing an opportunity to truly extend the value of customer satisfaction information. Providing a "satisfaction report card" will increasingly fail to meet management's need for direction. What happens if we are able to change the perception of an attribute of our product or service? A change in perception will lead to a change in level of satisfaction. Will the change in satisfaction lead to a change in brand preference, and therefore a change in market share, for our product or service? These and similar questions cannot be answered by most customer satisfaction research. In what follows we suggest that some additional data, in many cases not much beyond what is already being collected, coupled with some ideas from the area of choice modeling, can close the management "value gap" of satisfaction research.

An example

Customer satisfaction data can be integrated into choice models linking satisfaction measurements to preference and, in turn, to estimates of market share. Even better, once this linkage is established, the models can be used to simulate the share impact of gains (or losses) in satisfaction. These changes can be investigated for any given product or service, including a competitor's, or for any combination of competitive products or services.

An example is provided by Bradley Gale in his book Managing Customer Value. Gale outlines a sophisticated system for measuring customer satisfaction, but even his system falls short of what it could be. He strongly recommends measuring not only the client's performance but also the performance of key competitors. Measurements should be taken among both users and non-users of the companies' products. Attribute importance should also be measured. Gale then integrates these performance and importance measures into a single aggregate measure for each product or service, which he calls market perceived satisfaction (MPS) 1.

This aggregate MPS score (or any score calculated in a similar fashion) is available at the individual respondent level. At that level it is very similar to a preference or utility measure as discussed in the literature on choice modeling (see, for example, Urban and Hauser, 1980). In other words, we have the data to calculate a preference measure for each product or service in each respondent's consideration set.

Once we recognize this score as a preference measure, it is relatively straightforward to convert it into a choice model, again at the individual respondent level. The choice model can take the form of either a winner-take-all or probabilistic model 2. Once the choice model is established, a simulator can be applied to examine the impact on choice of changes in satisfaction levels on one or more attributes. These changes can occur for your product and/or competitive products.
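As a sketch of how a respondent-level preference score might be converted to choice, consider the snippet below. The brand names, scores, and logit scale parameter are all hypothetical; this is a minimal illustration of the two model forms, not a production model.

```python
import math

# Hypothetical MPS-style preference scores for one respondent's
# consideration set (brand -> score).
preferences = {"Brand A": 72.0, "Brand B": 65.0, "Brand C": 58.0}

def winner_take_all(prefs):
    """The alternative with the highest preference score is the 'winner'."""
    return max(prefs, key=prefs.get)

def logit_probabilities(prefs, scale=0.1):
    """Probabilistic (random utility / logit) form: preference scores
    are converted to choice probabilities. `scale` controls how sharply
    preference differences translate into probability differences."""
    exps = {b: math.exp(scale * s) for b, s in prefs.items()}
    total = sum(exps.values())
    return {b: e / total for b, e in exps.items()}

print(winner_take_all(preferences))  # Brand A
probs = logit_probabilities(preferences)
print({b: round(p, 3) for b, p in probs.items()})
```

Under winner-take-all the full choice goes to Brand A; under the logit form Brand A receives the largest, but not the entire, share of probability.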

To accomplish this transformation from static satisfaction scores to a dynamic choice simulator, several things need to be done to ensure its ultimate success.

Considerations to make it happen

First, attributes need to be carefully selected based on the impact they have on choice. Affecting brand choice is the reason most companies strive to improve customer satisfaction. The image customers or prospects have of a company, product, or service may be nice to know, but if it isn't tied to choice it is of less value. When selecting attributes for satisfaction research it is important to think in the context of choice. This also means that care needs to be given to having a proper balance of attributes. That is, all attributes of choice need to be covered and no one area should be oversampled.

Second, attribute scale sensitivity needs to be measured for each respondent. Not all scale intervals are created equal, and thus points on a scale should not be treated as equidistant from each other (see Semon, 1995). To illustrate, consider an attribute dealing with automobile safety. A requirement scale for this attribute might appear as

The safety of the automobile

                                                Typical    Self-Explicated
                                                Scaling        Scaling
 Exceeds my requirement by quite a lot             5             100
 Exceeds my requirement by a little                4              90
 Meets my requirement                              3              85
 Falls short of my requirement by a little         2              20
 Falls short of my requirement by quite a lot      1               0
Typical scaling of this attribute for the purpose of computing average performance scores would assign an equal-interval scale such as the one shown. Asked to scale this attribute in a self-explicated fashion (Srinivasan, 1988), an individual might assign the values, or utilities, shown. In this example there is little reward for exceeding requirements on automobile safety (maximum gain is 15 points), but there is a substantial penalty for falling short of requirements (maximum loss is 85 points) 3. Self-explicated values, besides being more accurate as a measure of satisfaction than assumed equal intervals, are diagnostic in their own right: they identify penalty situations and reward opportunities.
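The asymmetry in the safety example can be made concrete with a small sketch. The self-explicated values come from the hypothetical table above; the equal-interval column is the naive alternative:

```python
# Utilities for the five levels of the automobile-safety requirement scale.
equal_interval = {5: 100, 4: 75, 3: 50, 2: 25, 1: 0}   # naive equidistant scaling
self_explicated = {5: 100, 4: 90, 3: 85, 2: 20, 1: 0}  # respondent-supplied values

meets = 3  # "Meets my requirement"
reward = self_explicated[5] - self_explicated[meets]   # gain from exceeding requirements
penalty = self_explicated[meets] - self_explicated[1]  # loss from falling short
print(reward, penalty)  # 15 85: small reward, large penalty
```

Equal-interval scaling would report the same 50-point swing in each direction and thereby hide the penalty structure the respondent actually holds.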

Third, once one adopts a choice perspective for satisfaction research, it is clear that some attribute levels serve as choice thresholds in the spirit of conjunctive choice models. That is, an alternative perceived to perform at or below a given level on an attribute is precluded from consideration altogether. For example, consider the automobile safety attribute above. It is certainly possible that if a respondent perceives that the safety of a given automobile falls short of his or her requirements by quite a lot, that automobile would be dropped from consideration. Identifying these threshold levels is especially important because improved performance (satisfaction) on one attribute may be achievable only at the expense of lower performance (satisfaction) on a second attribute (e.g., improved acceleration at the expense of miles per gallon). Also, actions by one company in the marketplace may cause a loss in perceived satisfaction for a competitor's product or service on one or more attributes.
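A conjunctive screening pass of the kind described might be sketched as follows; the brands, attributes, and threshold levels are all hypothetical:

```python
# Perceived performance levels (1-5 requirement scale) for each brand.
perceptions = {
    "Brand A": {"safety": 4, "mpg": 3},
    "Brand B": {"safety": 1, "mpg": 5},  # fails the safety threshold
}

# Levels at or below the threshold preclude consideration of the alternative.
thresholds = {"safety": 2}

def consideration_set(perc, thr):
    """Keep only alternatives that clear every conjunctive threshold."""
    return [brand for brand, levels in perc.items()
            if all(levels[attr] > t for attr, t in thr.items())]

print(consideration_set(perceptions, thresholds))  # ['Brand A']
```

Only the surviving alternatives would then be scored by the compensatory part of the model, so strong mpg performance cannot rescue Brand B from its safety failure.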

Lastly, there is the matter of attribute importance weights. While the type of modeling we are advocating can be accomplished with rating scales for measuring attribute importance, we are much more comfortable using constant sum measurements. Constant sum eliminates the tendency of some respondents to use relatively high scores for all attributes and of others to use relatively low scores 4. Also, when measuring importance it is a good idea to put the measurement task in the context of choice, e.g., "Please divide 50 points (or stickers) across the attributes to indicate their relative importance when choosing among products (services) in the category."
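A constant-sum task converts directly to weights with no centering step, as this small sketch (hypothetical attributes and point allocations) illustrates:

```python
# One respondent's allocation of a fixed 50-point pool across attributes.
points = {"safety": 20, "mpg": 10, "price": 15, "styling": 5}

# Because every respondent's points sum to the same total, dividing by
# that total yields comparable weights directly; no centering is needed.
total = sum(points.values())
weights = {attr: p / total for attr, p in points.items()}
print(weights)  # safety 0.4, mpg 0.2, price 0.3, styling 0.1
```

With rating scales, by contrast, a "high scorer" and a "low scorer" who hold the same relative priorities produce different raw profiles and must be centered before comparison.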

The choice model 5 can integrate all measures at the individual respondent level and be used to predict choice. With the simulator in place, it is a simple matter to select different change scenarios and explore the impact of the selected changes on choice behavior. The change scenarios can involve both your product or service and one or more competitors.
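One way such a simulator might be assembled, under the assumptions above (self-explicated utilities, constant-sum weights, a logit choice rule), is sketched below for a single hypothetical respondent and two brands; all names and values are illustrative:

```python
import math

# Self-explicated utilities for each level (1-5) of each attribute.
utilities = {
    "safety": {1: 0, 2: 20, 3: 85, 4: 90, 5: 100},
    "mpg":    {1: 0, 2: 40, 3: 60, 4: 80, 5: 100},
}
# Constant-sum importance weights.
weights = {"safety": 0.6, "mpg": 0.4}

def preference(levels):
    """Preference = importance-weighted sum of self-explicated utilities."""
    return sum(weights[a] * utilities[a][lvl] for a, lvl in levels.items())

def shares(perceptions, scale=0.05):
    """Logit conversion of preference scores to choice shares."""
    exps = {b: math.exp(scale * preference(lv)) for b, lv in perceptions.items()}
    total = sum(exps.values())
    return {b: e / total for b, e in exps.items()}

# Base case versus a scenario in which "Us" improves perceived safety
# from "meets my requirement" (3) to "exceeds by a little" (4).
base     = {"Us": {"safety": 3, "mpg": 3}, "Them": {"safety": 4, "mpg": 4}}
scenario = {"Us": {"safety": 4, "mpg": 3}, "Them": {"safety": 4, "mpg": 4}}
print(shares(base)["Us"], shares(scenario)["Us"])  # share rises in the scenario
```

In practice the same calculation would be run for every respondent and the resulting probabilities averaged to estimate share; scenarios can just as easily degrade a competitor's perceived levels as improve your own.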

Thus, instead of guessing about the relative impact of alternative satisfaction improvement strategies and, by inference, share improvement strategies, one can use the simulator to rank-order them. The link to absolute, as opposed to relative, market shares is less precise, but the model and simulator make it possible to ballpark the profitability and ROI implications of alternative strategies.

Tracking considerations

Scale sensitivities, and perhaps attribute importances, are relatively stable over time. Thus, once measured, they are unlikely to change appreciably over a span of several years. This would be especially true in a relatively stable and/or mature market. On the other hand, perceptions of product/service performance on the attributes driving choice are likely to be more variable. Thus, from a tracking standpoint, product/service performance can be measured on an annual, or more frequent, basis, while attribute importance and scale sensitivity need be remeasured only every few years, depending to some degree on market dynamics.

Two added benefits

Having a choice model/simulation capability can point to a clear direction for change on ideal point attributes 6, assuming they are appropriately scaled. When satisfaction is measured on typical scales (e.g., a five- or seven-point scale from very unsatisfied to very satisfied), low levels of satisfaction on ideal point attributes are difficult to interpret. As an illustration, suppose 40 percent of a company's customers are dissatisfied on such an attribute. A real opportunity, right? But what if half of that 40 percent want to move in one direction, for example more carbonation, and half in the other direction, less carbonation? A change in one direction will make the product worse for half of those already dissatisfied and will negatively affect the satisfaction of the 60 percent who are already satisfied. Such situations are characteristic of attributes whose ideal points lie somewhere in the middle of the possible range of values. For such attributes, dissatisfaction is not necessarily an opportunity. Using a requirement or expectation scale, or a scale expressed in units of the attribute (e.g., carbonation measured from a little to a lot of carbonation), along with self-explicated values, will allow a proper interpretation of dissatisfaction. The choice model/simulator will allow for estimating the impact of movements in either direction on preference or share.
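The carbonation illustration can be sketched numerically. The respondents' ideal points and the simple distance-based utility function are hypothetical:

```python
# Five respondents' ideal carbonation levels on a 1-9 units-of-attribute scale.
ideals = [3, 3, 7, 7, 5]

def utility(level, ideal):
    """Utility falls with distance from the respondent's ideal point."""
    return -abs(level - ideal)

def mean_utility(level):
    """Average utility of a given carbonation level across respondents."""
    return sum(utility(level, i) for i in ideals) / len(ideals)

# Moving carbonation up from 5 helps the ideal-7 respondents but hurts
# the ideal-3 (and ideal-5) respondents, and vice versa; the current
# level is already the best compromise on average.
print(mean_utility(4), mean_utility(5), mean_utility(6))
```

Even though four of the five respondents are "dissatisfied" at level 5, moving in either direction lowers average utility, which is exactly why dissatisfaction on an ideal point attribute is not automatically an opportunity.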

A second benefit of looking at customer satisfaction research through the eyes of a choice model is the ability to incorporate the concept of brand equity as discussed by Park and Srinivasan (1994).

Consideration of this topic is beyond the scope of this article, but the main requirement, besides some of the ideas introduced here, is an objective or expert measure of brand performance on the attributes.

Enhance value

Putting customer satisfaction measurement in the context of self-explicated choice models can enhance the value of this important research area by allowing management to truly understand the strategic implications of satisfaction improvement strategies for their own or competing products or services. All that's required is a wedding of the two.

Notes

1 Actually market perceived satisfaction is a weighted average of two sub-scores called market perceived quality and market perceived price.

2 In a winner-take-all model the alternative with the highest preference score is the "winner"; in a probabilistic model preference scores are converted to probabilities using either a constant utility or random utility model (see Ben-Akiva and Lerman (1993) for a discussion).

3 For other attributes there may be a substantial reward for exceeding requirements and little penalty for falling short.

4 A two-cluster solution on attribute importance ratings frequently results in one cluster with relatively high scores assigned to all attributes and the second cluster with relatively low scores across all attributes. This necessitates centering the data, an unnecessary step when using constant sum measurement.

5 The choice model advocated here takes the form of a conjunctive-compensatory model (Srinivasan 1988).

6 Ideal point attributes are ones with the ideal point in the interior of the scale, e.g., amount of carbonation in a soft drink, in contrast to vector attributes, where more is always better, e.g., safety in the automobile.

References

Ben-Akiva, Moshe, and Steven R. Lerman, Discrete Choice Analysis: Theory and Application to Travel Demand, Cambridge, Mass.: The MIT Press, 1993.

Gale, Bradley T., Managing Customer Value, New York: The Free Press, 1994.

Green, Paul E., and V. Srinivasan, "Conjoint Analysis in Marketing Research: New Developments and Directions," Journal of Marketing, 54 (October 1990), 3-19.

Park, Chan Su, and V. Srinivasan, "A Survey-Based Method for Measuring and Understanding Brand Equity and Its Extendibility," Journal of Marketing Research, 31 (May 1994), 271-288.

Semon, Thomas T., "Weighty Debate - Didn't Cover Weights" (Letters to the Editor), Marketing Research, 7 (Winter 1995), 4-5.

Srinivasan, V., "A Conjunctive-Compensatory Approach to the Self-Explication of Multiattributed Preferences," Decision Sciences, 19 (Spring 1988), 295-305.

Urban, Glen L., and John R. Hauser, Design and Marketing of New Products, Englewood Cliffs, N.J.: Prentice-Hall, 1980.