Editor’s note: This article was supplied by Thomson Marketing Resources, a Boston firm providing marketing research and strategy services to businesses within the Thomson Financial and Professional Publishing Group, part of the Thomson Corporation.

Imagine that you’ve purchased a new car and a year later you receive a survey in the mail from the auto manufacturer. They want to know how satisfied you are with the car. There is a host of questions asking you to rate satisfaction with everything from the car’s design features to your experience at the dealership to your maintenance visits. Because you (and several hundred others) dutifully answer each question, the auto manufacturer amasses a wealth of data. Now their researchers can compare the relative satisfaction ratings of the various features measured. They can crosstab by demographics, psychographics, car make and model, etc. But there’s one piece of critical information the individual ratings data doesn’t tell them: how important each feature is in influencing your overall satisfaction rating. For this, they can rely on a relatively simple yet under-used analysis method: derived importance.

In today’s lean and mean corporate economy, companies must allocate precious resources carefully to ensure maximum profits from their efforts. Derived importance analysis is a means of identifying the key areas on which a company must focus to ensure high satisfaction levels among customers (and high repeat or additional business). In the past, discovering that a majority of owners were less satisfied with the tail fins of a car than they were with the warranty would have sent manufacturers scurrying after development dollars for a full body redesign. What if tail fins actually are a minor annoyance, though, and the big-ticket decision influencer is the flexibility of financing options? The company has just spent millions to "fix" a minor problem while doing little to defuse a ticking time bomb for future sales.

Nuts and bolts of derived importance

Derived importance is calculated by correlating each feature’s satisfaction ratings with the overall satisfaction rating across all respondents; the stronger the correlation, the more heavily that feature appears to drive overall satisfaction. The calculation itself is relatively simple and can be executed in a standard spreadsheet program (e.g., Excel). The results are then graphed in a two-by-two format, which allows companies to see where they perform well and where they have problems in the eyes of customers.
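
For readers who prefer to see the mechanics, here is a minimal sketch of that calculation in Python with pandas rather than a spreadsheet. The feature names and ratings below are hypothetical.

    import pandas as pd

    # One row per respondent; "overall" is the overall satisfaction rating,
    # the other columns are feature-level ratings (all values hypothetical).
    responses = pd.DataFrame({
        "overall":    [4, 5, 3, 2, 5, 4, 3],
        "design":     [4, 5, 4, 3, 5, 4, 3],
        "dealership": [3, 4, 2, 2, 5, 3, 2],
        "financing":  [4, 5, 2, 1, 5, 4, 2],
    })

    features = [col for col in responses.columns if col != "overall"]

    # Derived importance: the correlation of each feature's ratings with
    # the overall rating, computed across respondents.
    importance = responses[features].corrwith(responses["overall"])

    # For the graph's satisfaction axis: each feature's mean rating.
    satisfaction = responses[features].mean()

    print(pd.DataFrame({"importance": importance, "satisfaction": satisfaction}))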

Figure 1 is a sample of a two-by-two derived importance graph. The "x" axis measures the derived importance of product features (the strength of each feature’s correlation with overall satisfaction), while the "y" axis measures how satisfied customers are with the features (typically the mean satisfaction rating). The "Keep up the Good Work" features, high in both importance and satisfaction, appear in the upper-right quadrant. Inevitably, though, there are some plot points in the lower-right quadrant: the "Problem Areas," where product features are important to customers but are not measuring up. This is where you should focus resources to improve satisfaction ratings. The upper-left quadrant shows features whose satisfaction ratings are high but whose importance is low. This is where you could trim resources and redirect them to features that matter more to your customers.
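
To make the quadrant logic concrete, here is a rough sketch, continuing in Python, of how each feature could be assigned to a quadrant. The values are hypothetical, and splitting each axis at its median is one common choice of cut point, not the only one.

    import pandas as pd

    # Hypothetical derived-importance values (x axis) and mean satisfaction
    # ratings (y axis) for a handful of features.
    importance = pd.Series(
        {"design": 0.30, "dealership": 0.62, "financing": 0.81, "tail fins": 0.12}
    )
    satisfaction = pd.Series(
        {"design": 2.8, "dealership": 4.1, "financing": 2.6, "tail fins": 3.8}
    )

    # Split each axis at its median to form the two-by-two grid.
    imp_cut = importance.median()
    sat_cut = satisfaction.median()

    for feature in importance.index:
        high_imp = importance[feature] >= imp_cut
        high_sat = satisfaction[feature] >= sat_cut
        if high_imp and high_sat:
            quadrant = "Keep up the Good Work"  # upper right
        elif high_imp:
            quadrant = "Problem Area"           # lower right
        elif high_sat:
            quadrant = "Candidate for trimming resources"  # upper left
        else:
            quadrant = "Low priority"           # lower left (not discussed above)
        print(f"{feature}: {quadrant}")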

Why the mathematical cloak-and-dagger? Why not just ask survey respondents to rate the importance of all features as well as satisfaction? Direct questions don’t work in these cases. Very often, survey respondents will tell you that everything is important. (Picture yourself answering the following questions for a health insurance plan: How important are low premiums? How important is flexibility in provider selection? How important is quality medical care?).

Designing the survey

Carefully designed surveys are the key to successful application of derived importance analysis. While there are schools of thought to the contrary, we typically place the question rating overall satisfaction with the company or products up front in the survey. In this way, we get customers’ true read on general satisfaction, before they are (perhaps) influenced by subsequent questions that focus them on specific product features.

Giving respondents an odd-numbered ratings scale ensures that they are not forced to choose a rating on one side or another of the scale. For example, if you had only a four-point scale and a respondent was truly neutral on the feature, he would be forced to choose either a "two" toward the poor ratings side or a "three" toward the excellent ratings side. Neither would be a true reflection of his feelings. But give him a five-point scale, and he’ll most likely choose a "three."

It’s important to put a lot of thought into deciding what features you want customers to rate. For this, we typically execute qualitative research among the client base to define and refine the list of features to test in the quantitative survey. If you give respondents too lengthy a list of features to rate, they may be daunted by the survey length, and your response rates might suffer. We’ve found that responses begin to wane after five satisfaction rating questions. If you are testing a captive audience (like an internal employee group), or you have a group of very eager respondents, you may get away with a lengthier list.

The two-by-two graph doesn’t always yield a clean, neat division of plot points among the four quadrants. What do you do if you have clusters of features near the midpoints? When this happens, we generally go back to the data and look at various segments. In some cases, one segment could be skewing all the data. (That’s another reason it’s important to keep the survey at a reasonable length: the greater the number of responses, the more you can go back to the data and segment it without losing statistical significance.)
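
One way to run that check is to recompute the correlation within each segment and compare. Here is a sketch, again in Python, with a hypothetical segment variable and a single feature.

    import pandas as pd

    # One row per respondent; "segment" is whatever grouping variable the
    # survey captured (region, product line, etc. -- hypothetical here).
    responses = pd.DataFrame({
        "segment":   ["A", "A", "A", "B", "B", "B"],
        "overall":   [4, 5, 3, 2, 3, 2],
        "financing": [4, 5, 3, 5, 4, 5],
    })

    # Pooled derived importance for the feature.
    print("pooled:", responses["financing"].corr(responses["overall"]))

    # Derived importance recomputed within each segment; a large gap
    # between segments suggests one group may be skewing the pooled result.
    by_segment = (
        responses.groupby("segment")[["overall", "financing"]]
        .apply(lambda g: g["financing"].corr(g["overall"]))
    )
    print(by_segment)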

A case study

In 1997, we conducted a benchmarking customer satisfaction survey for a client in the book-publishing field. The client was most interested in determining how customers perceived them overall, and how satisfied customers were with their individual products. We performed derived importance analysis on the survey data to show our client how customers perceived their performance on various product features, and which features contributed most to customers’ overall satisfaction ratings.

While the client knew prior to the survey that they had a couple of "problem" areas to address, the derived importance analysis and two-by-two graph yielded an unpleasant surprise: Customers rated contact with salespeople as very important, but gave our client’s salespeople a rather low satisfaction rating. As the customer relationship is critical for repeat business and referrals, our client took swift action. Within a year, they had reorganized their sales force and shifted their focus to more consultative selling, in which the salesperson maintains a close relationship with the customer even after the sale has closed. The following year, when our client conducted their second survey, satisfaction ratings for the sales process were significantly higher and more in line with the importance ratings.

Without derived importance analysis, this publisher might not have discovered this hidden problem until it was too late to win back lost market share. And by committing to a regular benchmarking survey of their customer base, our client is able to measure the success of their efforts.

With a little thought up front, derived importance analysis can be conducted on any survey data. When your products and budgets are on the line, it’s too good a tool not to use.