Editor’s note: Tim Glowa and Sean Lawson are principals of North Country Research, a Calgary, Alberta, research firm.

There are many approaches to satisfaction monitoring and just as many scales for measuring and reporting the results. Unfortunately, the measurement of satisfaction is often treated as an end in itself, and the impact of achieving higher satisfaction ratings on the bottom line is frequently ignored.

This article investigates common misconceptions about satisfaction measurement and the assumptions inherent in many satisfaction studies. Then, through a discussion of how satisfaction is typically measured, a new scale is introduced that provides increased descriptiveness and strategic insight for satisfaction researchers. Finally, we suggest how satisfaction measurement results can be made actionable by linking satisfaction measures to customer behavior through predictive modeling techniques.

Obsession with satisfaction

Management is seemingly obsessed with satisfaction ratings. Managers cite them in reports, encourage staff to help increase client satisfaction, tie bonus packages to satisfaction indexes, and commission survey after survey to find out how clients feel about their products and services.

Many companies sponsor recurring satisfaction studies that are compared against previous studies in an effort to benchmark corporate performance. This satisfaction obsession has led companies to think of satisfaction measurement as an indicator of performance - a proxy for profit or market-share numbers. Although the relationship between satisfaction and corporate earnings has a visceral appeal, there is something overly simplistic about the assumption that a simple index of satisfaction has the import of a profit calculation.

The reason for this misconception lies in two assumptions that most satisfaction measurement projects have in common:

1) that satisfaction and the bottom line are positively correlated; and,

2) that satisfaction metrics suggest a strategy for increasing satisfaction.

Let us consider these assumptions. If we invoke the fundamental economic assumption of the “profit motive” (and there is no reason not to in this context), then it is safe to say that management should be interested in increasing revenues and decreasing costs above all else. Further, management, employees, and consultants will all tell you that higher levels of client satisfaction are better for the company, but no one discusses the cost of achieving the higher ratings. The costs are not considered to be real, or, more accurately, they are always assumed to be economic - worth the expenditure. Certainly it makes sense that, as customers become happier with a product, they may demand more of it, but there is a limit - a classic case of diminishing returns to investment. Yet it is rare that a satisfaction measurement study considers the costs of achieving an increased satisfaction rating or what that increased level of satisfaction (or service) would be worth to the consumer.

Satisfaction studies need to consider what the end goal of management and the corporation is - in most cases, to increase profits (or, very often, market share) - and define the linkage between satisfaction and that goal.

The second assumption (that satisfaction measurement implies strategies for improvement) is best considered in the context of some examples. Consider the ways that satisfaction is usually measured. Some scales use biased wording (e.g., “not satisfied” to “very satisfied”); some try to gauge satisfaction against expectations (e.g., “worse than expected” to “better than expected”); others use terminology open to interpretation (e.g., “not very important” to “very important”). In the end, nothing about these scales implies a strategy for improving satisfaction any more than a bathroom scale provides nutritional insights.

What good will it do to know that your company scored a “3 out of 5” or a “very good” when you have no idea how to improve that rating or whether it is even worth improving?

Management needs more information and the burden to provide it falls naturally to the researcher.

How should satisfaction be measured?

Ideally, satisfaction should be measured in the same context in which it is supposed to exist or be provided. That is, just as you measure a person’s weight by putting them on a scale rather than asking them outright, the satisfaction of a group should be measured by examining how differing levels of service affect the choices the group makes. Now, while putting people on a scale to measure their weight is within the realm of possibility, examining an infinite number of real-life situations in which the members of a group respond to varied service levels is definitely not. As much as we wish otherwise, market researchers do not have all the answers.

The key to successful and meaningful measurement is getting as close to this ideal as possible. There are two main components. First, the scale used to measure performance should provide succinct guidance on how to improve the measure. Second, there should be an understanding of the linkage between satisfaction and behavior.

What is your scale telling you?

The standard measurement tool is the Likert scale (or multiple-choice scale). It allows the researcher to offer a variety of options to the respondent, but, as suggested previously, there are problems with the interpretation of the scales. Joseph Duket explains that, according to standard dictionary definitions, “poor” means “inferior and unsatisfactory,” while “inferior” means “lower in quality, worth or adequacy; mediocre.” Finally, “mediocre” is defined as “of only average quality.” Does this mean that a “poor” rating is average? Probably not. The point here is that the researcher and respondent can become swallowed up in a sea of semantic conflations that makes meaningful analysis very difficult.

Another approach, suggested by Steven Lewis, starts from the observation that the intermediate rating terms used to describe satisfaction variables have different meanings in other countries, and proposes a dichotomous adjective scale in which only the definitive end-points are defined (e.g., “totally satisfied” to “totally unsatisfied”). This is an improvement over traditional scales, since a “totally satisfied” customer cannot be satisfied further. However, a problem arises if management tries to act upon measurements like these. Where do they start? Simply knowing a satisfaction rating provides little direction on how a performance or satisfaction attribute could be improved.

Likert scales (and the related semantic differential scales) are efficient ways to collect data, but they often need to be augmented by qualitative responses in the respondents’ own words (written answers) to fully understand why respondents answered as they did. What is needed is a hybrid between the efficiency of a multiple-choice-style scale and the descriptiveness of written answers.

One alternative is to replace the standard increments of a Likert scale with carefully constructed propositions that provide the respondents with a clear articulation of their experiences. Compare the following scales from an airline satisfaction study: The first is a typical Likert scale; the second is a propositional-descriptive scale.

Scale A: Standard Satisfaction Scale
How would you rate the performance of the in-flight crew today?
  (1) Poor

  (2) Fair

  (3) Good

  (4) Very Good

  (5) Excellent

Scale B: Propositional-Descriptive Scale
How would you rate the performance of the in-flight crew today? The crew:

  (1) anticipated your needs and made you feel that they were genuinely pleased to serve you;

  (2) were pleased to serve you and provided assistance when asked;

  (3) made you feel like they were just doing their jobs;

  (4) often neglected your needs even when asked.

Clients who have used propositional-descriptive scales feel they provide a cleaner measure of satisfaction and better strategic direction. Although the above scales were not tested in the field, similar tests of standard Likert or semantic differential scales versus propositional-descriptive scales suggest that respondents are less likely to cluster towards the middle or top of the scale, as often happens when respondents are reluctant to say something critical.

The use of propositional-descriptive scales gets us closer to understanding what strategies can be employed to increase satisfaction. If the results of the study show that airline travelers generally feel that the in-flight crew was “just doing their jobs” then that is something that can be communicated to management, who, in turn, can promote increased service levels or add more staff to an in-flight crew.
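
To make this concrete, here is a minimal sketch of how responses to the propositional-descriptive question above might be tabulated and reported back to management. The response data and the abbreviated labels are entirely hypothetical and are included for illustration only.

```python
# Tabulate hypothetical responses to the propositional-descriptive
# crew-performance question (1 = best proposition, 4 = worst).
from collections import Counter

labels = {
    1: "anticipated needs, genuinely pleased to serve",
    2: "pleased to serve, assisted when asked",
    3: "just doing their jobs",
    4: "often neglected needs",
}

# Hypothetical responses from 20 passengers.
responses = [3, 2, 3, 1, 3, 4, 2, 3, 3, 2, 3, 1, 3, 2, 4, 3, 3, 2, 3, 3]

counts = Counter(responses)
for level in sorted(labels):
    share = counts.get(level, 0) / len(responses)
    print(f"{labels[level]:<45} {share:5.0%}")
```

A result dominated by “just doing their jobs” points management directly at the behavior to change - something a distribution of “3 out of 5” ratings cannot do.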

Using a propositional-descriptive scale may provide more information about satisfaction levels and how to improve them but there is more to the equation than that. What is the value of achieving higher satisfaction levels? Is it worth it?

How satisfied?

How satisfied do you want them to be? Management must answer this question. However, in order to answer it, management will need to know what drives client satisfaction and how it can affect satisfaction ratings. Once the satisfaction level of a group is clearly understood in terms of its determinants, management can decide whether the efforts (and costs) required to increase satisfaction are worth it.

It should be noted that, regardless of the research methodology and tools employed, many researchers are bypassing an opportunity to increase the value of satisfaction research to the client. What good is it to know the determinants of satisfaction when the end goal is the bottom line? The vast majority of satisfaction measurement techniques fail to clearly demonstrate the linkage between changes in satisfaction perceptions and changes in market share. As such, they may be able to tell that the overall satisfaction of a group will increase if perceptions of a specific attribute improve, but no linkage is made to the bottom line. In some cases, researchers simply assume that increases in satisfaction produce a corresponding increase in market share. There is no necessary connection between satisfaction and market share that would support this blanket assumption.

If management is ultimately concerned with the bottom line, then increasing satisfaction in a group should increase the client’s market share enough to justify the resources required to achieve the higher rating. If this is not the case, then higher satisfaction ratings have, for that client, become counterproductive. Satisfaction research must be tied to the costs of achieving higher satisfaction and to whether there will be an offsetting increase in demand to justify the expenditure. Again, the approach to consider is the one that most closely resembles the decision environment of the respondents.

Discrete choice modeling and satisfaction measurement

The decision environment can best be approximated by asking the respondent to participate in a repeated set of hypothetical situations where they make decisions just as they would if they were faced with the situation in real life. The tool in this case is discrete choice modeling. (For a longer discussion on discrete choice modeling, please see the article by Steven Struhl mentioned in the section on further reading.)

Discrete choice models are derived by placing respondents in hypothetical situations where they are asked to choose between two or more competitors offering a product or service that is defined by a series of satisfaction attributes. Because respondents are stating which product they prefer, inclusive of various levels of defined service, the model can predict the impact on market share not only of changes in client-perceived satisfaction, but also of changes in competitors’ satisfaction levels.
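
For readers who want to see the mechanics, the sketch below estimates a simple multinomial (conditional) logit model of this kind from simulated stated-choice data. Everything here - the attributes (crew service level and fare), the “true” coefficients, and the data themselves - is a hypothetical illustration rather than output from an actual study; it simply shows how choices among competing offers can be turned into a predictive model.

```python
# Minimal sketch: estimating a multinomial (conditional) logit model
# from simulated stated-choice data. All numbers are hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# 200 hypothetical choice tasks, each offering 3 airline alternatives
# described by crew service level (1 = best ... 4 = worst) and fare ($).
n_tasks, n_alts = 200, 3
service = rng.integers(1, 5, size=(n_tasks, n_alts)).astype(float)
fare = rng.uniform(140, 260, size=(n_tasks, n_alts))
X = np.stack([service, fare], axis=2)            # (tasks, alts, attributes)

# Simulate respondents whose "true" preferences penalize poor service
# and high fares, plus random (Gumbel) taste shocks.
true_beta = np.array([-0.8, -0.02])
chosen = (X @ true_beta + rng.gumbel(size=(n_tasks, n_alts))).argmax(axis=1)

def neg_log_likelihood(beta):
    u = X @ beta                                  # utility of each alternative
    u -= u.max(axis=1, keepdims=True)             # numerical stability
    p = np.exp(u)
    p /= p.sum(axis=1, keepdims=True)             # choice probabilities
    return -np.log(p[np.arange(n_tasks), chosen]).sum()

fit = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS")
beta_service, beta_fare = fit.x
print("estimated service coefficient:", round(beta_service, 3))
print("estimated fare coefficient:   ", round(beta_fare, 4))
```

The recovered coefficients describe how strongly a one-step change in crew service, or a one-dollar change in fare, shifts the odds that a respondent chooses a given airline.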

Discrete choice modeling also allows for the inclusion of price as a factor influencing choice. By including price, respondents are able to make trade-offs between service levels and price, thereby revealing how much they are willing to pay for a given level of service. Ultimately, this information can be used to calculate the premium that could be charged for providing higher levels of service.
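
That trade-off has a simple expression once the model is estimated: the dollar value of a one-level improvement in crew service is the ratio of the service and fare coefficients. Continuing the hypothetical sketch above, with assumed coefficients of -0.8 (service) and -0.02 (fare):

```python
# Willingness to pay for a one-level improvement in crew service,
# using the assumed (hypothetical) coefficients from the sketch above.
beta_service, beta_fare = -0.8, -0.02
wtp_per_level = beta_service / beta_fare   # dollars per service level
print(f"premium supportable per service level: ${wtp_per_level:.2f}")
```

In this illustration the market would bear roughly a $40 fare premium for each one-step improvement in crew service - the kind of number that lets management weigh a service initiative against what customers will actually pay for it.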

A problem with most satisfaction measurement methodologies is that satisfaction for an individual company or organization is measured in isolation from its competitors. This unrealistic environment creates false impressions about the importance of satisfaction: in the real world, consumers can and do switch between competitors when presented with unsatisfactory levels of service - they can even choose not to participate in the market at all. Discrete choice modeling lets the respondent indicate clearly which levels of service are sufficient to cause a change in purchasing behavior.

As Bill Etter points out, the marriage between discrete choice modeling and satisfaction measurement is a perfect one. With discrete choice modeling, the data collection methodology closely resembles the actual decision-making process, in that respondents evaluate all the attributes of a choice situation simultaneously rather than considering them individually. Thus, the respondents’ choices in the hypothetical scenarios reflect their own perceived value of service or product attributes in a similar marketplace.

The researcher can combine key attributes (price, competitors, etc.) with satisfaction scales such as the propositional-descriptive scale to model the market and tie current satisfaction levels to current market share. More importantly, the researcher can also measure how changes in attribute levels, including satisfaction levels, will affect market share. This enables the researcher to provide management with information about the potential revenues associated with changing satisfaction levels, helping to answer the question: “Is it worth it?”
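
A minimal sketch of that final step appears below: using the hypothetical coefficients assumed earlier, it predicts how one airline’s share shifts if its crew moves from “just doing their jobs” to “pleased to serve,” holding competitors constant. The competitors, fares, and coefficients are all assumptions for illustration, not results from a real study.

```python
# Simulating market share under a multinomial logit model with the
# hypothetical coefficients (service level, fare) assumed earlier.
import numpy as np

beta = np.array([-0.8, -0.02])

def shares(offers):
    """Predicted shares for competing offers described by (service, fare)."""
    u = offers @ beta
    p = np.exp(u - u.max())
    return p / p.sum()

base    = np.array([[3, 200.0], [2, 210.0], [4, 170.0]])  # airlines A, B, C today
upgrade = np.array([[2, 200.0], [2, 210.0], [4, 170.0]])  # A improves crew service

print("Airline A share today:        ", round(shares(base)[0], 3))
print("Airline A share after upgrade:", round(shares(upgrade)[0], 3))
```

Under these assumed numbers, Airline A’s predicted share rises from roughly 27 percent to 46 percent; multiplying that gain by the size of the market gives the revenue figure to set against the cost of delivering the better service.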

The end goal

If the end goal of satisfaction research is to positively affect the bottom line of the corporation, then the research itself needs to reflect that goal. Satisfaction measurement should suggest how a rating can be improved and should tie the suggested strategies to cost data. Further, if management is to make informed decisions, it must be aware of the revenue implications of any satisfaction initiatives. To do this, the best approach is to model the impacts of proposed changes on the decision behavior of the market. By comparing the cost and revenue sides of a marketing strategy, it is possible to evaluate its effectiveness and worth. And that is the bottom line.

Selected references for further reading
(All Quirk’s articles listed are accessible free of charge by visiting the Quirk’s Article Archive at www.quirks.com.)

Moshe Ben-Akiva and Steven Lerman, Discrete Choice Analysis: Theory and Application to Travel Demand, Cambridge: MIT Press, 1985.

Gordon C. Bruner and Paul J. Hensel, Marketing Scales Handbook: A Compilation of Multi-Item Measures, Chicago: American Marketing Association, 1994.

Tim Carvell and Jane Furth, “Americans Can’t Get No Satisfaction,” Fortune, December 11, 1995.

Joseph Duket, “Comment Cards and Rating Scales: Who Are We Fooling?,” Quirk’s Marketing Research Review, May 1997.

Bill Etter, “Customer Satisfaction and Choice Modeling: A Marriage,” Quirk’s Marketing Research Review, October 1996.

Steven Lewis, “The Language of International Research,” Quirk’s Marketing Research Review, November 1997.

Jordan Louviere, “Analyzing Decision Making: Metric Conjoint Analysis,” Sage University Papers Series on Quantitative Applications in the Social Sciences, No. 67, Newbury Park, California: Sage Publications, 1988.

Steven Struhl, “Discrete Choice Modeling: Understanding a Better Conjoint than Conjoint,” Quirk’s Marketing Research Review, June 1994.