Editor's note: Joseph Duket is president of Q&A, Inc., a Smyrna, Ga., research firm.

As a marketing researcher, I have acquired a hobby of collecting comment cards whenever and wherever I find them. These innocuous little cards come in every size and shape imaginable, from official-looking trifolds addressed to "Chief Executive Officer" to simple index cards with a few lines for open-ended feedback. The one thing almost all of these cards have in common, however, is that they paint a distorted picture of customer satisfaction.

Studies have shown that 26 out of 27 dissatisfied customers - 96 percent - never voluntarily complain. Yet companies from mom-and-pop shops to the Fortune 100 still rely heavily on these cards to measure customer satisfaction. Compounding the dubious reliability of comment cards, companies create further bias in the design of their rating scales. One look at the chart below and you have to wonder how confused customers must be by this semantic hodgepodge.

With the exception of one company that didn't think highly enough of itself to warrant an excellent score (very good was its top rating) and another that chose to deceive itself by assigning below average as its lowest score, the only terms that seem to be universally accepted are excellent and poor. All other terms and rating points in between are nebulous, to say the least.

According to Funk & Wagnalls, the term excellent means "being of the very best quality." As a superlative term, it requires no qualifier or adjective to increase its impact. One person or company can not be more excellent than another. If you're doing the best job or providing the best quality, no one can do better.

For the word poor, however, Funk & Wagnalls uses the synonyms inferior and unsatisfactory in its definition. Confusion arises when the word inferior is described as "lower in quality, worth or adequacy; mediocre; ordinary." Mediocre is then defined as "of only average quality." So, taking this exercise in interpretation to the extreme, poor performance could actually mean an average rating.

Good is perhaps the most misunderstood term used in rating scales. What exactly is good performance when it comes to customer satisfaction? As clearly shown in the examples, some companies consider good to be synonymous with above average while others consider good to be the same as average.

Average, the term many firms use as the mid-point of their scales, can be misconstrued as well. Average can be defined as the arithmetic mean (as in a batting average) or as a synonym for ordinary or mediocre. If used as the mean, who determines what average performance is? According to many people, the average customer service at Ritz-Carlton hotels or Nordstrom department stores is excellent. And, on the other hand, the average (mean) service level at many fast-food restaurants is poor.

Depending on which scale you're using, words can take on far different meanings. On a three-point scale, fair can be synonymous with acceptable or satisfactory, while on a four- or five-point scale it's comparable to below average or needs improvement. There is danger in using the term needs improvement for a "D" rating, since employees and managers may then look upon a "C" as good enough not to warrant improvement. Does the company only need to improve once it has reached the point where customers are defecting in droves? In the real world of customer service, any score other than excellent needs improvement.

Many companies, either consciously or unconsciously, stack the deck in their favor by employing rating scales skewed to the positive side. Such scales as:

Excellent / Very good / Good / Fair / Poor
Excellent / Good / Average / Below average
Very good / Satisfactory / Unsatisfactory
Excellent / Okay / Poor
Excellent / Good / Poor

will obviously produce much higher positive scores than if more equitable scales were given or more relevant terms were used. Companies can also lull themselves into a false sense of security, as in the example above of the company that used below average as its lowest rating.

I can hear the customer service manager explaining to the CEO now, "Well, our customer satisfaction scores are below average but at least we're not failing."
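To make the arithmetic of that skew concrete, here is a quick sketch (the responses, grade-to-label mappings and positive/negative groupings below are hypothetical, invented purely for illustration): the same set of customer grades is tallied once on a positively skewed five-point scale and once on a more balanced one, and the share of responses that land on a positive-sounding label swings wildly.

```python
# Hypothetical illustration: the same underlying customer sentiment, tallied on two scales.
# Sentiment is expressed as a school-style grade (A-F); each scale maps grades to labels.

# Skewed five-point scale: three of the five labels read as positive.
skewed = {"A": "Excellent", "B": "Very good", "C": "Good", "D": "Fair", "F": "Poor"}
positive_skewed = {"Excellent", "Very good", "Good"}

# More balanced alternative: only the top two labels read as positive.
balanced = {"A": "Excellent", "B": "Good", "C": "Average", "D": "Below average", "F": "Poor"}
positive_balanced = {"Excellent", "Good"}

# Hypothetical responses: mostly middle-of-the-road "C" customers.
responses = ["A", "C", "C", "C", "B", "D", "C", "B", "C", "F"]

def positive_share(scale, positive_labels):
    """Fraction of responses whose label on this scale sounds positive."""
    hits = sum(1 for grade in responses if scale[grade] in positive_labels)
    return hits / len(responses)

print(f"Skewed scale:   {positive_share(skewed, positive_skewed):.0%} of responses read as positive")
print(f"Balanced scale: {positive_share(balanced, positive_balanced):.0%} of responses read as positive")
# Skewed scale:   80% of responses read as positive (the Cs count as "Good")
# Balanced scale: 30% of responses read as positive (the Cs are merely "Average")
```

Nothing about the customers changes between the two tallies; only the labels do, yet the skewed scale turns a roomful of "C" customers into a "positive" majority.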

The most disturbing aspect of this name selection process - and the one that has the most detrimental effect on business - involves the middle-of-the-road satisfaction score. Whether it's called average, good, fair, okay, acceptable or satisfactory, the fact remains that no business should accept such a rating as positive. At best, a "C" means the company is providing the bare basics of quality or service. Just as in the scholastic arena, where a "C" student is doing just enough to get by - nothing more and nothing less - it's a marginal passing score that should not be considered acceptable if the business expects to retain customers and prevent defections. And it most certainly should never be misconstrued as a good score.

I don't know too many parents with high aspirations for their child to attend a good college who would look at a report card of all Cs and say, "You're doing okay!" or "Good job!" And I can't imagine any proud parent telling a child with a D that he or she did "fair." Yet companies continue to pat themselves on the back for mediocre performance by attaching such words to their rating scales.

The bottom-line reason for measuring customer satisfaction in the first place is the impact it has on retaining a customer's business. One suggested scale, which provides not only more relevant meanings but also their possible impact on customer loyalty, is shown here.

Companies should realize that customer satisfaction ratings can easily be biased not only by the methodology of data collection but by semantics as well. It's time we began informing our customers and employees what the ratings mean and how they are to be interpreted.