Editor's note: Patrick McPhillips is research manager, and Eleonora Malpa is marketing coordinator, with FIND/SVP, New York.

The main objective of any customer satisfaction survey is to isolate key areas for improvement. But just how does a company know which of the targeted areas will bring the greatest value to the customer and thus prove a worthwhile investment of time and resources?

There are several analyses commonly applied to the data resulting from quantitative customer surveys. The first set of analyses ranks the importance of supplier selection criteria. The second set gauges customers' perceptions of a company's performance and provides an assessment of competitors' performance. Often the analysis process stops here, overlooking a vital component: regression analysis.

Regression analysis can pinpoint the areas that have the greatest impact on customer satisfaction. It is effective in illustrating the impact that performance on one product or service issue (the independent variable) has on overall customer satisfaction with the company (the dependent variable). This helps companies identify where to concentrate their resources to maintain profitability and ensure long-term success.

While it is rather simple to gather the necessary data and information from current and prospective customers through telephone or other surveys, analyzing the data appropriately is not. In a typical customer satisfaction survey, respondents are asked to rate the importance of several supplier selection criteria using a scale of anywhere from four to 10 points. These criteria have been determined ahead of time through an exploratory or qualitative phase in which in-person or other in-depth interviews are conducted with industry experts. The idea is to include as many important selection criteria in the survey as possible, keeping in mind that the interview will last between 15 and 18 minutes.

There are two common ways of determining the importance of supplier selection criteria in customer satisfaction surveys: calculating the mean rating and calculating the Top 2 Box scores based on a frequency distribution. Both provide simple numerical rankings. For these approaches, the t-test is an effective statistical tool that helps determine whether there are statistically significant differences between the importance ratings of two given issues. In other words, the t-test can help qualify the numerical ranking provided by the mean and the frequency distribution.

The simplest way of determining the importance of the various issues is to calculate the mean rating given by the respondents for each issue. If the scale used is as follows,

Sample Scale

Critical                4
Very Important          3
Important               2
Somewhat Important      1
(Not Important          0)

(There should not be any issues that are not important, although you can allow for the rare respondent who insists that a particular item has no importance to him/her.)

then the resulting ranking might look like the following (in decreasing order of importance):

Fictitious data

Product quality                   3.74
On-time delivery                  3.68
Price competitiveness             3.65
Conformance to specifications     3.42
etc.

The t-test is then used to determine whether the difference between the means of two issues is statistically significant. Based on the above fictitious data, a t-test could tell us that the importance rating received by product quality (3.74) is significantly greater than that of price competitiveness (3.65) but not significantly greater than that of on-time delivery (3.68). In other words, even though product quality appears to be more important than on-time delivery, statistically it is not.
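To make this concrete, here is a minimal sketch in Python of the mean-rating and t-test step, using made-up raw ratings rather than the fictitious means above. Because the same respondents rate every issue, a paired t-test is shown; an independent-samples test would apply if the ratings came from separate groups.

```python
# A sketch of the mean-rating comparison and t-test, assuming we have each
# respondent's raw importance ratings on the 4-point scale (data is made up).
from scipy import stats

product_quality       = [4, 4, 3, 4, 4, 3, 4, 4, 4, 3, 4, 4]
on_time_delivery      = [4, 3, 4, 4, 3, 4, 4, 4, 3, 4, 4, 3]
price_competitiveness = [4, 3, 3, 3, 4, 4, 3, 4, 3, 4, 3, 3]

def mean(ratings):
    return sum(ratings) / len(ratings)

print(f"Product quality:       {mean(product_quality):.2f}")
print(f"On-time delivery:      {mean(on_time_delivery):.2f}")
print(f"Price competitiveness: {mean(price_competitiveness):.2f}")

# Paired t-test: the same respondents rated both issues, so the samples are
# related. (Use stats.ttest_ind instead for ratings from independent groups.)
t_stat, p_value = stats.ttest_rel(product_quality, price_competitiveness)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("The difference in mean importance is statistically significant.")
else:
    print("The difference is not statistically significant.")
```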

Another method used to calculate importance is to look at the Top 2 Box scores for each supplier selection criterion. This approach requires calculating the frequency distributions for each criterion. A frequency distribution simply shows what percentage of the respondents rated a particular issue as being critical, very important, important or somewhat important. The result would look something like this:

Fictitious data

Product quality: 58% Critical; 27% Very Important; 10% Important; 5% Somewhat Important; 0% Not Important (Top 2 Box = Critical + Very Important = 85%)

Price competitiveness: 46% Critical; 28% Very Important; 18% Important; 6% Somewhat Important; 2% Not Important (Top 2 Box = 74%)

On-time delivery, etc.: 52% Critical; 20% Very Important; 7% Important; 20% Somewhat Important; 1% Not Important (Top 2 Box = 72%)

The criteria are ranked in decreasing order of importance according to their Top 2 Box scores. Again, the t-test is applied to determine whether the difference between two issues' Top 2 Box scores is statistically significant. Some feel that this approach is more valid than simply taking the mean.
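The calculation itself is straightforward. Below is a minimal sketch, again using made-up raw ratings, that tabulates a frequency distribution for one criterion and derives its Top 2 Box score.

```python
# A sketch of a frequency distribution and Top 2 Box score for one criterion,
# assuming raw ratings on the 4-point importance scale (data is made up).
from collections import Counter

labels = {4: "Critical", 3: "Very Important", 2: "Important",
          1: "Somewhat Important", 0: "Not Important"}

ratings = [4, 4, 3, 4, 2, 4, 3, 4, 1, 4, 3, 4, 4, 2, 4, 3, 4, 4, 2, 4]

counts = Counter(ratings)
n = len(ratings)

# Percentage of respondents giving each rating, from Critical down
for value in sorted(labels, reverse=True):
    pct = 100 * counts[value] / n
    print(f"{labels[value]:<20} {pct:5.1f}%")

# Top 2 Box = percentage rating the issue Critical or Very Important
top_2_box = 100 * (counts[4] + counts[3]) / n
print(f"{'Top 2 Box':<20} {top_2_box:5.1f}%")
```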

Competitive performance

Once the importance ranking of the supplier selection criteria has been determined, performance in specific areas needs to be evaluated on a competitive level. A rating scale is used to assess the performance of one or more suppliers.

Sample Scale

Excellent          6
Very Good          5
Good               4
Fair               3
Poor               2
Very Poor          1

Like importance, performance scores can be calculated in more than one way. The performance means can be calculated for each supplier, giving a result like the following:

Fictitious data

Product quality
Supplier A's Score 4.75
Supplier B's Score 4.88
Supplier C's Score 5.23

Price Competitiveness
Supplier A's Score 4.26
Supplier B's Score 5.51
Supplier C's Score 4.47

On-time Delivery
Supplier A's Score 5.01
Supplier B's Score 3.79
Supplier C's Score 4.59

or the Top 2 Box scores can be calculated based on the frequency distribution, giving a different type of result:

Fictitious data

Supplier A
Product quality: 26% Excellent; 53% Very Good; 10% Good; 6% Fair; 5% Poor; 0% Very Poor (Top 2 Box = Excellent + Very Good = 79%)

Price competitiveness: 32% Excellent; 49% Very Good; 12% Good; 7% Fair; 0% Poor; 0% Very Poor (Top 2 Box = 81%)

On-time delivery, etc.: 75% Excellent; 15% Very Good; 10% Good; 0% Fair; 5% Poor; 0% Very Poor (Top 2 Box = 90%)

Supplier B
Product quality: 31% Excellent; 19% Very Good; 35% Good; 10% Fair; 3% Poor; 2% Very Poor (Top 2 Box = 50%)

Price competitiveness: 19% Excellent; 68% Very Good; 5% Good; 3% Fair; 5% Poor; 0% Very Poor (Top 2 Box = 87%)

On-time delivery, etc.: 22% Excellent; 59% Very Good; 19% Good; 0% Fair; 0% Poor; 0% Very Poor (Top 2 Box = 81%)

Supplier C
Product quality: 26% Excellent; 70% Very Good; 4% Good; 0% Fair; 0% Poor; 0% Very Poor (Top 2 Box = 96%)

Price competitiveness: 31% Excellent; 57% Very Good; 8% Good; 3% Fair; 1% Poor; 0% Very Poor (Top 2 Box = 88%)

On-time delivery, etc.: 29% Excellent; 60% Very Good; 10% Good; 1% Fair; 0% Poor; 0% Very Poor (Top 2 Box = 89%)

Analyzing suppliers' performance on a competitive level is a critical step in customer satisfaction measurement. In both analyses of performance, comparisons can be made between the different competitors on each supplier selection criterion. Again, statistical significance must be considered. Is supplier B really outperforming supplier A on price competitiveness, as indicated in the two charts above? Just as the t-test determined the statistical significance of differences in the stated importance of the supplier selection criteria, an analysis of variance can be run to determine whether statistically significant differences exist between the suppliers' performance on these criteria.
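As an illustration, here is a minimal sketch of a one-way analysis of variance comparing the three suppliers on a single criterion. The ratings are made up; a significant F-statistic would then be followed by a post-hoc comparison to see which pairs of suppliers actually differ.

```python
# A sketch of a one-way ANOVA on suppliers' price-competitiveness ratings,
# assuming raw 1-6 performance ratings from respondents (data is made up).
from scipy import stats

supplier_a = [4, 5, 4, 3, 5, 4, 4, 5, 4, 5, 4, 4]
supplier_b = [6, 5, 6, 5, 6, 5, 5, 6, 5, 6, 6, 5]
supplier_c = [5, 4, 5, 4, 4, 5, 4, 5, 4, 5, 5, 4]

f_stat, p_value = stats.f_oneway(supplier_a, supplier_b, supplier_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # At least one supplier's mean rating differs significantly; a post-hoc
    # test such as Tukey's HSD would show which pairs of suppliers differ.
    print("At least one supplier performs significantly differently.")
else:
    print("No statistically significant difference among the suppliers.")
```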

Overall satisfaction

At this point the company knows what is important to the customer, how well it is performing and where it stands versus the competition. It seems that the company has all of the information necessary to determine where to focus improvement efforts, and this is where analysis efforts often stop. However, overall satisfaction must still be considered, along with the impact each issue has on it. Are the issues which respondents claim to be critical or very important the same issues that most strongly influence their overall satisfaction?

First we need to obtain overall satisfaction scores for each supplier. This requires an additional question in the questionnaire, in which we use the same performance rating scale as before and ask respondents to state their overall level of satisfaction with the different suppliers. After the overall satisfaction scores have been calculated, regression analysis is instrumental in pinpointing the supplier selection criteria that have the greatest impact on overall satisfaction.
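Here is a minimal sketch of that regression step, assuming each row of data holds one respondent's ratings of the individual criteria together with that respondent's overall satisfaction score (all values are made up). The relative size and significance of the coefficients indicate each criterion's derived importance.

```python
# A sketch of regressing overall satisfaction (dependent variable) on the
# individual attribute ratings (independent variables); all data is made up.
import numpy as np
import statsmodels.api as sm

# Columns: product quality, price competitiveness, on-time delivery (1-6 scale)
attribute_ratings = np.array([
    [5, 4, 6], [6, 5, 5], [4, 4, 4], [5, 6, 5], [6, 5, 6],
    [3, 4, 4], [5, 5, 5], [6, 6, 5], [4, 3, 4], [5, 5, 6],
    [6, 4, 5], [4, 5, 3],
])
overall_satisfaction = np.array([5, 6, 4, 5, 6, 3, 5, 6, 4, 5, 5, 4])

# Ordinary least squares with an intercept term
X = sm.add_constant(attribute_ratings)
model = sm.OLS(overall_satisfaction, X).fit()

# Each coefficient estimates the change in overall satisfaction associated
# with a one-point improvement on that criterion, holding the others fixed.
print(model.params)
print(model.pvalues)
```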

Identification of areas of opportunity

The major analytical steps that should be a part of a comprehensive customer satisfaction effort have now been covered. The first step was to establish how important various supplier selection criteria are to customers and to use a t-test to determine whether or not any statistically significant differences exist between the stated importance of any two given issues. The next step was to evaluate the performance of different suppliers on these selection criteria and again determine whether or not statistical differences exist between the performance scores of the different suppliers on any given issue using an analysis of variance. Finally, regression analysis addresses the question of overall satisfaction in a more direct manner, determining which selection criteria are most strongly correlated with it.

In this example, the regression analysis may show that issues other than those cited as critical or very important by the respondents are the ones that most strongly affect overall satisfaction. The data above shows that product quality, price competitiveness and on-time delivery are the issues with the highest stated importance. There may be other issues tested in the survey, such as the responsiveness of technical service, the product knowledge of representatives and order status updates, that have lower stated importance but a stronger correlation with overall satisfaction. With this added information offered by the regression analysis, the decision regarding where to focus improvement efforts and where to allocate resources is not as simple as it first appeared. Fortunately, it is now possible to make a more educated decision.

We can now place each supplier selection criterion into one or more of seven categories:

1. Issues of high stated importance.

2. Issues of low stated importance.

3. Issues strongly correlated to overall satisfaction.

4. Issues weakly correlated to overall satisfaction.

5. Issues where we have a competitive advantage over other suppliers.

6. Issues where no supplier has a competitive advantage.

7. Issues where other suppliers have a competitive advantage over us.

With the results of the t-tests, analysis of variance and regression analysis, it is now clear that the categories of greatest interest are No. 3 (issues strongly correlated to overall satisfaction) and No. 7 (issues where other suppliers have a competitive advantage over us). Since there may be several issues in category seven (areas where we are at a competitive disadvantage), the information regarding stated importance from categories one and two can also be used to prioritize the areas that will be addressed first.
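This screening logic can be expressed very simply. The sketch below assumes we have already computed, for each criterion, a regression coefficient (its impact on overall satisfaction) and a performance gap versus the best-performing competitor; the values and the threshold are purely illustrative.

```python
# A sketch of the prioritization step: flag criteria that both drive overall
# satisfaction strongly and sit at a competitive disadvantage (made-up data).
criteria = {
    # name: (regression coefficient, our mean score minus best competitor's)
    "Product quality":          (0.15, -0.48),
    "Price competitiveness":    (0.10, -1.25),
    "On-time delivery":         (0.20,  0.42),
    "Technical responsiveness": (0.45, -0.60),
    "Order status updates":     (0.35, -0.30),
}

IMPACT_THRESHOLD = 0.30   # illustrative cut-off for "strongly correlated"

priorities = [
    (name, coef, gap)
    for name, (coef, gap) in criteria.items()
    if coef >= IMPACT_THRESHOLD and gap < 0   # categories 3 and 7 overlap
]
priorities.sort(key=lambda item: item[1], reverse=True)

print("Improvement priorities:")
for name, coef, gap in priorities:
    print(f"  {name}: impact {coef:.2f}, competitive gap {gap:+.2f}")
```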

The ultimate goal of a customer satisfaction measurement effort is to increase customer satisfaction and, in turn, profits. As a necessary complement to standard analysis practices in customer satisfaction measurement, regression analysis is key to obtaining actionable results from the effort. Based on these results, actions must be taken, and a company must have confidence that by targeting particular areas for improvement it will affect overall customer satisfaction and thus have a positive impact on the bottom line.