Editor’s note: Colleen Currier is marketing resources manager at BASF Corporation, Mount Olive, N.J. Arthur H. Schultz is director of advanced analysis at RSVP Research Services, Philadelphia.

In customer satisfaction research, a frequent practice is to compare the satisfaction levels that customers say they experience in doing business with the study sponsor against their experience with several different competitors. Comparisons are made for overall satisfaction, and for each of several product and service characteristics (product quality, customer service, timely delivery, etc.).

Typically the respondent-provided ratings for each supplier are aggregated, using either an average or a top-box percentage. Then the aggregated values are compared. Below are the top-three-box percentages (ratings of 8, 9, or 10 on a 1-to-10 scale) for overall satisfaction that were computed in a recent study of the suppliers of an industrial commodity.

Table 1

Based on the top-box analysis, competitor one was clearly the satisfaction leader, and competitor three was clearly the worst performer. The study sponsor and competitor two were essentially tied for second place.

Limitations of top-box percentages and averages

Let’s first describe just two of the limitations of top-box percentages and averages as measures of satisfaction, and then see how a simple alternative analysis can provide additional insights.

1. Top-box percentages and averages are aggregate values, and they lose the fine-grain detail that is available from each respondent.

Imagine that respondent 001 rated the study sponsor a 10, and competitor one a 6. Imagine further that respondent 002 did just the opposite, rating the study sponsor a 6 and competitor one a 10.

The actuality is that the study sponsor and competitor one each have one customer that regards them favorably, and one that does not. However, an average (for just these two respondents) would be 8 for the study sponsor and 8 for competitor one. Top-box percentages would both be 50 percent.

If we knew only the average or the top-box percentage, we would infer that the performance of the two suppliers is identical. We could not tell whether individual respondents gave similar ratings, or gave different and opposed ratings. For our example, the fact that the respondents hold different views is lost as a result of making the comparison after aggregating.
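The arithmetic behind this example is easy to verify. The short Python sketch below (the ratings are the hypothetical ones from the example, not study data) shows that both the average and the top-box percentage come out identical for the two suppliers, even though the respondents hold opposed views:

```python
# Two respondents with opposed ratings (hypothetical data from the example):
# respondent 001 rates the sponsor 10 and competitor one 6;
# respondent 002 rates the sponsor 6 and competitor one 10.
sponsor_ratings = [10, 6]
competitor_ratings = [6, 10]

def average(ratings):
    return sum(ratings) / len(ratings)

def top_box_pct(ratings, threshold=8):
    """Percentage of ratings in the top-three-box range (8-10 on a 1-10 scale)."""
    return 100 * sum(1 for r in ratings if r >= threshold) / len(ratings)

print(average(sponsor_ratings), average(competitor_ratings))          # 8.0 8.0
print(top_box_pct(sponsor_ratings), top_box_pct(competitor_ratings))  # 50.0 50.0
```

Either aggregate, compared on its own, would declare the two suppliers identical.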

2. Top-box percentages and averages cannot deal simultaneously with the ratings of both “hard scorers” and the ratings of “easy scorers.” (We call this effect “scale bias,” that is, the tendency of some respondents to concentrate their answers in just one part of the provided scale.)

Imagine that respondent 001, a generous and easygoing soul, rated the study sponsor a 10, and competitor two an 8. The respondent prefers the study sponsor over competitor two, but both suppliers would benefit equally in a top-box computation.

And imagine that respondent 002, a severe and demanding individual, rates the study sponsor a 7, and competitor two a 5. Again, the respondent has a clear preference for the sponsor, but both suppliers are penalized equally in a top-box computation.

Finally there is respondent 003, a middle-of-the-roader. She rates the sponsor a 9 and competitor two a 7. At last we have a respondent who does what we expect: puts one supplier in the top-three-box range and one outside it.

We have three respondents, each preferring the sponsor, and each giving the sponsor a two-point edge over competitor two. But top-box analysis only sees one of them.
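The scale-bias effect can be demonstrated directly. In this sketch (again using the three hypothetical respondents above, not study data), every gap is +2 in the sponsor's favor, yet only one of the three comparisons crosses the top-box threshold:

```python
# The three hypothetical respondents from the example: each prefers the
# sponsor by exactly two rating points, but top-box counting notices only one.
ratings = [  # (sponsor, competitor_two)
    (10, 8),  # easy scorer: both ratings land in the top-three box
    (7, 5),   # hard scorer: both ratings land outside it
    (9, 7),   # middle-of-the-roader: only the sponsor makes the top box
]

TOP_BOX = 8  # ratings of 8, 9, or 10 count as top-three box on a 1-10 scale

# A comparison is visible to top-box analysis only when the two ratings
# fall on opposite sides of the threshold.
detected = sum(1 for s, c in ratings if (s >= TOP_BOX) != (c >= TOP_BOX))
print(f"preferences visible to top-box analysis: {detected} of {len(ratings)}")
# Every pairwise gap, by contrast, is +2 in the sponsor's favor:
print([s - c for s, c in ratings])  # [2, 2, 2]
```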

What to do?

Discrete satisfaction-gap profiling

We have developed a simple procedure that does not aggregate the respondent data. This procedure just counts. It makes no use of either top-box ratings or averages. It can be used to supplement comparisons of top-box ratings and averages, or to replace them.

We call this procedure discrete satisfaction-gap profiling:

  • Discrete . . . because we do not aggregate the data before we make the comparisons between suppliers. Instead we look at each respondent’s ratings individually, one at a time.
  • Satisfaction-gap . . . because that is exactly what we are examining, the satisfaction gap between the study sponsor and the competitor.
  • Profiling . . . because the end-result is a three-value profile that describes the relationship between the study sponsor and each competitor.

Discrete satisfaction-gap profiling has several benefits:

  • It retains the full information content of the data set, and can thus provide insights not available from averages or top-box percentages.
  • It avoids all “scale bias” effects.
  • And, perhaps the biggest advantage of all, it is unequivocal.

Each respondent has weighed the two suppliers and made a choice. There is no need to explain the use of sophisticated statistical tools to the clients. Like a boxer or a basketball team, the client's company has a record: win, lose, draw. End of story.

These are the steps:

1. For each respondent we compute a satisfaction-gap between the study sponsor and the competitor. The satisfaction gap is the difference between the satisfaction rating the respondent gave the study sponsor and the rating the respondent gave the competitor.

Our convention is always to subtract the ratings of the competitor from the study sponsor. Thus if respondent 001 gave a rating of 10 to the sponsor and 7 to competitor one, the satisfaction gap was 3. If respondent 002 gave a rating of 6 to the sponsor and 8 to competitor one, the satisfaction gap was -2.

2. Next, we classify each respondent’s gap as being favorable (to the sponsor), a tie, or unfavorable.

To classify the gaps, we must first make a key assumption:

  • Gaps of +2 or greater are favorable for the sponsor.
  • Gaps of +1, 0, or -1 are ties.
  • Gaps of -2 or worse are unfavorable for the sponsor.

This assumption is arbitrary, but we think sensible. Other classifications might be appropriate under other circumstances. If a scale other than 1 to 10 is used, some other scheme is necessary.

3. Then we compute the percentage of respondents that gave the sponsor a favorable satisfaction gap versus competitor one, the percentage that gave a tie versus competitor one, and the percentage that gave an unfavorable gap versus competitor one.

We repeat this process for each competitor.
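The three steps can be sketched in a few lines of Python. The function name and the example ratings below are our own illustration; only the classification thresholds (+2/-2 on a 1-to-10 scale) come from the procedure described above:

```python
from collections import Counter

def gap_profile(pairs, favorable_at=2):
    """Discrete satisfaction-gap profile for one sponsor/competitor pairing.

    `pairs` is a list of (sponsor_rating, competitor_rating) tuples, one per
    respondent who rated both suppliers. Gaps of +2 or more are favorable to
    the sponsor, -2 or worse are unfavorable, and -1/0/+1 are ties.
    """
    counts = Counter()
    for sponsor, competitor in pairs:
        gap = sponsor - competitor          # step 1: per-respondent gap
        if gap >= favorable_at:             # step 2: classify the gap
            counts["favorable"] += 1
        elif gap <= -favorable_at:
            counts["unfavorable"] += 1
        else:
            counts["tie"] += 1
    n = len(pairs)                          # step 3: convert to percentages
    return {k: round(100 * counts[k] / n)
            for k in ("favorable", "tie", "unfavorable")}

# Illustrative (made-up) ratings, not the study data:
print(gap_profile([(10, 7), (6, 8), (9, 8), (5, 5)]))
# → {'favorable': 25, 'tie': 50, 'unfavorable': 25}
```

Running the same function once per competitor yields the full set of three-value profiles.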

An application

We’ll examine the results of applying discrete satisfaction-gap profiling to the suppliers of the industrial commodity mentioned earlier.

Table 2

The end result is a table similar to the one shown. (We have included the overall satisfaction top-three box data shown earlier for comparison, although these values are not used in the profiling. Since a gap is the difference between ratings of the sponsor and each competitor, the row in the table for “sponsor” is empty.)

The three rows in the satisfaction gap section of the table provide the profile for the sponsor versus each competitor.

Some things have not changed. The top-box analysis concluded that competitor one was ahead of the sponsor in providing overall satisfaction. Discrete satisfaction-gap profiling supports this finding. Of the respondents who directly compared the sponsor and competitor one, 30 percent preferred competitor one; only 13 percent preferred the sponsor. (And 58 percent saw little difference between them.)

Likewise, the top-box analysis concluded that competitor three trailed the sponsor, and the satisfaction gap profile supports this conclusion. Thirty-nine percent of the respondents who directly compared the sponsor and competitor three preferred the sponsor; only 15 percent preferred competitor three. (And 48 percent saw little difference between them.)

But some information is new, and startling. For one thing, roughly half (48 percent to 59 percent) of the respondents see the sponsor and the competitors as tied in providing overall satisfaction (by our definition, a gap within ±1 rating point is a tie). This is true regardless of the competitor to which respondents compared the sponsor. The fact that, for every competitor, half or more of the respondents were about as satisfied with the sponsor as with the competitor could not have been inferred from the top-box analysis. This finding has critical implications for sales management and advertising.

Further, the top-box overall satisfaction rating implied that the sponsor and competitor two are tied in the minds of the respondents (top-box ratings of 56 percent versus 52 percent). Yet the profile of the sponsor versus competitor two was quite good; about three times as many respondents favor the sponsor as favor competitor two (30 percent favorable comparisons versus 11 percent unfavorable).

Table 3

(The top-box comparison failed to detect this fact because, for 57 percent of the direct comparisons of the sponsor and competitor two, the ratings for both suppliers were either both within the 8, 9, or 10 top-box range, or both outside it. Comparing top-box percentages detects a difference only when one rating is within the top-box range and one is outside it. For this set of data, a top-box percentage comparison effectively ignored 57 percent of the comparisons the respondents provided.)
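This blind spot is easy to quantify. The sketch below (with made-up ratings, since the study data are not reproduced here) counts the share of direct comparisons in which both ratings land on the same side of the top-box threshold and are therefore invisible to a top-box comparison:

```python
TOP_BOX = 8  # ratings of 8, 9, or 10 count as top-three box on a 1-10 scale

def comparisons_invisible_to_top_box(pairs):
    """Percentage of direct (sponsor, competitor) comparisons in which both
    ratings fall on the same side of the top-box threshold, so a top-box
    percentage comparison cannot distinguish the two suppliers."""
    same_side = sum(1 for s, c in pairs if (s >= TOP_BOX) == (c >= TOP_BOX))
    return round(100 * same_side / len(pairs))

# Illustrative ratings only:
print(comparisons_invisible_to_top_box([(10, 9), (7, 5), (9, 6), (6, 8)]))  # 50
```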

Using graphs

Although we made our supplier comparisons before combining any results, we nevertheless lost some detail when we collapsed the gaps into just three categories: favorable, tie, and unfavorable.

We can turn up the magnification of our analytical microscope by looking at all the gaps. A table can present this data, but we have found that a chart is more quickly grasped by client management. The chart looks more closely at competitor two.

Here we learn more. The 11 percent of respondents who favored competitor two over the sponsor all did so by a difference of only two rating points (on the 1-to-10 scale). But the 30 percent who favored the sponsor mostly did so by a difference of three points or more. Graphical analysis thus reinforces our conclusion: although the top-box ratings indicate that the sponsor and competitor two are tied for overall satisfaction, the sponsor in fact has a perceived advantage among those respondents who rated both (remembering, always, that 59 percent of those respondents saw little difference between the two).
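A full gap distribution of the kind charted here can be tabulated in a few lines. The ratings below are illustrative only (the study data are not reproduced); the output maps each gap value to the percentage of respondents reporting it, ready to be plotted as one bar per gap:

```python
from collections import Counter

def gap_distribution(pairs):
    """Frequency distribution of individual satisfaction gaps, suitable for
    charting (one bar per gap value, from -9 to +9 on a 1-to-10 scale)."""
    gaps = Counter(s - c for s, c in pairs)   # gap = sponsor minus competitor
    n = len(pairs)
    return {g: round(100 * gaps[g] / n) for g in sorted(gaps)}

# Illustrative ratings only:
pairs = [(9, 7), (10, 6), (8, 8), (7, 9), (6, 5)]
print(gap_distribution(pairs))  # → {-2: 20, 0: 20, 1: 20, 2: 20, 4: 20}
```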

Disadvantages of discrete satisfaction-gap profiling

Does discrete satisfaction-gap profiling have any drawbacks? We see two.

1. The first is that the work of analysis increases. Simply comparing averages or top-box percentages and noting who comes in first is a quick and undemanding procedure. The computer provides a sorted list, and the analysis is finished.

Discrete satisfaction-gap profiling requires that a thoughtful analyst put some time into examining all of the three-value profiles and, in some cases, into looking at the underlying frequency distributions. Increased effort equals increased cost.

2. The second is that the data can get rather sparse. The study under discussion had 150 respondents from a survey candidate list of about 450, a typical mid-sized study for industrial work. (Many industrial studies have a survey candidate list of under 100, and total respondents numbering in the 20s or 30s.) To keep the phone interview to a reasonable length, each respondent was asked to rate only the sponsor and two other suppliers. As a result of this design, the number of direct comparisons available for discrete satisfaction-gap profiling was about 50 for each competitor. We think this level is adequate, but would be concerned should it be much lower.

In summary:

  • Comparing averages and top-box percentages to learn which supplier has the satisfaction advantage can lose some of the information the respondents have provided.
  • The cause of this loss is that averages and top-box percentages make the comparison after the data has been aggregated, losing the information on perceived satisfaction gaps that is available when we look at each respondent individually.
  • Counting the perceived satisfaction gaps reported by each respondent keeps this information, and provides a deeper insight into the nature of satisfaction differences among competing suppliers.
  • Discrete satisfaction-gap profiling provides a less equivocal description of customer preferences among competitors than averages or top-box percentages can.