Editor’s note: Terry Vavra and Doug Pruden are partners at research firm Customer Experience Partners. Vavra is based in Allendale, N.J. Pruden is based in Darien, Conn.

With the ubiquity of customer satisfaction programs these days, one would expect overall satisfaction levels to be continually improving. However, according to the American Customer Satisfaction Index, which has monitored national satisfaction for the last 20 years, overall levels haven’t risen as one would have expected, given all of the attention being directed toward the customer experience and satisfaction. So what’s going on?

Wide range of explanations

There is a wide range of possible explanations for the apparent disconnect between the intense interest in customer satisfaction measurement and measured levels of satisfaction. Likely suspects include:

• Satisfaction programs are measuring the wrong things. We’ve found far too many satisfaction questionnaires overwhelmed by process owners who want feedback on their processes, whether or not those processes are key drivers of customer satisfaction! In such cases the questions asked of customers become an operational litany: “Did we pick you up on time?” “Was your room’s temperature comfortable?” “Did the sales clerk offer you an extended warranty?” And on and on. Astute observers will recognize that internal vehicles for answering such questions already exist in the system; there is no need to burden customers with observing the known.

• Many programs don’t feed information into action-planning processes. Sad but true: if a satisfaction program is simply “window dressing” in the C-suite, the insights from the program are unlikely to be discussed with customer-facing departments and employees. As a result, nothing really changes. Action plans provide a disciplined, structured process for reacting to deficiencies (and sufficiencies). Most importantly, they specify which departments are responsible for rectifying each measured performance issue; they assign ownership of problems. With ownership come observable improvement initiatives.

• Maybe the customer experience is more complex than the structure of the current satisfaction questionnaire can address. As a result, some of the highly impactful experiential components that make up and help explain touchpoints are missing. We believe that satisfaction depends on many more factors than are customarily included in the conventional satisfaction questionnaire. What inhibits the inclusion of such broadening factors is the unfortunate desire to keep measuring the same issues. To us, once satisfactory performance has been demonstrated (through repeated measurements), an issue can and should be removed from the satisfaction inventory, allowing new, unexplored issues to be added.

These possible explanations suggest the typical satisfaction measurement sponsor may not be doing enough. The satisfaction professional must not only compose the best questionnaire possible and oversee its distribution, completion and return but also verify that it’s really addressing the right issues.

Can you prove you're measuring the right issues?

When the corporate employee identified as the sponsor/owner of a customer satisfaction program presents the findings of the process to colleagues and management, one of the first questions he or she is likely to be asked is, “How do we know this information is accurate?” Questions may continue with, “Why should we trust (and act on) it?” Passing an intuitive check is one way to answer, but more and more companies are turning to formal validation programs.

To answer these questions, the sponsor can rely on an appeal to face validity: “We identified what we believed to be the most important issues and then asked our customers to evaluate us on those issues.” But more astute sponsors and many reliable vendors will expend greater effort to objectively test the accuracy of the data they’re collecting. This testing can be called a validation or confirmation process. It generally involves correlating the satisfaction results with some other information collected independently of the satisfaction data.

When you come to the fork in the road, take it!

There are two basic ways to validate your satisfaction survey findings:

1. Test the correlation between related information gathering systems (e.g., correlate your customer satisfaction results with mystery shopping scores you may also collect). We’ll call this a process of internal validation.

2. Or, correlate satisfaction findings with actual business outcomes (profitability, sales growth, customer loyalty, etc.). We call this a process of external validation.

The notion of internal/external refers to the target of the correlation. Validating a survey process (customer satisfaction) against another survey or observation method (e.g., mystery shopping) is considered internal validation because both the subject (customer satisfaction) and the validation target (mystery shopping) are of the same ilk: survey processes. On the other hand, when a survey process is compared to a business outcome, we consider this external validation because the validation target (the comparison data) is of a different variety, i.e., actual business results.

Maybe you have not yet been challenged to prove that your research findings are truly valid, but if you hope to see them lead to corrective actions and/or if you are spending a budget of any meaningful size, then you likely will be.

While some consultants might tackle validation with high-powered statistical procedures (regression analysis is a favorite), such procedures are difficult for rank-and-file managers to understand and to communicate to others. We’ve successfully used a much more intuitive process to test the validity of satisfaction surveys. It's easier to do and it's more easily understood by others. The method uses a simple classification table.

A classification table is nothing more than a tabular form of correlation. If a group of units (operational divisions, stores, etc.) is divided (high to low) on two different measures, how do the units distribute among the cells of the table? In a two-by-two table, absent any relationship, approximately one-quarter of the units should fall into each of the four cells. This case (of equal cells) illustrates a situation of total independence between the two measures. If, however, two or more cells differ significantly from 25 percent, then the two measures can be seen as related or in agreement (i.e., correlated). Using actual data from a past engagement with an operator of health clubs, here are two sample classification tables, one for the internal and one for the external validation process.
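To make the mechanics concrete, here is a minimal sketch (in Python, using pandas) of how such a two-by-two classification table can be built: split the units at the median on each measure, cross-tabulate the resulting halves and report cell percentages. The function name and the per-club numbers below are hypothetical placeholders, not the health-club figures discussed next.

```python
# A minimal sketch of the two-by-two classification table described above.
# The per-club scores are hypothetical placeholders.
import pandas as pd

def classification_table(df: pd.DataFrame, measure_a: str, measure_b: str) -> pd.DataFrame:
    """Split units at the median on each measure and cross-tabulate the
    top/bottom halves, returning cell percentages of all units."""
    half_a = (df[measure_a] >= df[measure_a].median()).map(
        {True: "top half", False: "bottom half"})
    half_b = (df[measure_b] >= df[measure_b].median()).map(
        {True: "top half", False: "bottom half"})
    return (pd.crosstab(half_a, half_b, normalize="all") * 100).round(1)

# With no relationship between the two measures, each of the four cells
# would hover around 25 percent; clustering on the diagonal signals agreement.
clubs = pd.DataFrame({
    "satisfaction": [78, 85, 62, 90, 71, 66, 88, 74],
    "mystery_shop": [80, 82, 60, 93, 65, 70, 91, 68],
})
print(classification_table(clubs, "satisfaction", "mystery_shop"))
```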

Internal validation

First, an example of internal validation, using two sources of survey data: overall customer satisfaction (from club members) and overall mystery shopping scores (from professional mystery shoppers). Tallying the percentage of clubs scored in the top half of both measures (31 percent) plus the percentage scored in the bottom half of both (29 percent) yields an agreement score of 60 percent. It’s clear both measures agree on which clubs are best and which are worst. Put another way, if you knew the satisfaction score a club received, six out of 10 times you could correctly predict whether its mystery shop score fell in the top half or bottom half of all mystery shopping scores. This classification table internally validates the satisfaction measurement process.
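The agreement score itself is just simple arithmetic on the two “agreeing” cells. As a quick illustration, using only the percentages quoted above (the helper function is ours, not the authors’):

```python
# The agreement score is the share of clubs both measures place in the
# same half: the top-top plus bottom-bottom cell percentages.
def agreement_score(top_top_pct: float, bottom_bottom_pct: float) -> float:
    return top_top_pct + bottom_bottom_pct

# Cell percentages reported for satisfaction vs. mystery shopping:
print(agreement_score(31, 29))  # 60 -> six out of 10 correct "predictions"
```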

External validation

Let’s explore how external validation works. We also had access to actual member-loss (attrition) data for the sample of 162 stores/units. We composed a similar classification table displaying customer satisfaction rankings versus attrition rankings (fewest members lost to most members lost). The customer satisfaction process is again validated because the percentages in the top-top and bottom-bottom cells are both greater than the chance outcome of 25 percent.
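As a hedged sketch rather than the authors’ actual analysis, the same cross-tab approach carries over to an external measure; the one wrinkle is that low attrition is the good outcome, so its direction is flipped before splitting. All per-club numbers below are invented placeholders, not the article’s data.

```python
# A sketch of the external-validation cross-tab: satisfaction vs. attrition.
# Attrition is inverted so that "fewest lost" plays the role of "top half".
# All per-club numbers are invented placeholders.
import pandas as pd

clubs = pd.DataFrame({
    "satisfaction":   [78, 85, 62, 90, 71, 66, 88, 74],
    "attrition_rate": [0.12, 0.08, 0.21, 0.05, 0.18, 0.16, 0.07, 0.14],
})

sat_half = (clubs["satisfaction"] >= clubs["satisfaction"].median()).map(
    {True: "top half", False: "bottom half"})
loss_half = (clubs["attrition_rate"] <= clubs["attrition_rate"].median()).map(
    {True: "fewest lost", False: "most lost"})

table = (pd.crosstab(sat_half, loss_half, normalize="all") * 100).round(1)
print(table)  # validation holds if the agreeing cells each exceed 25 percent
```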


Which approach is right for you?

There are a number of approaches that will allow you to validate your customer satisfaction process; at one time or another we have executed them all. But we’ve observed that the greatest acceptance comes from a process that is easily understood and easily visualized by management. It also helps if the process doesn’t rely too heavily on researcher math and jargon. Most of all, as a satisfaction process owner you should personally know how valid your program really is. Take the time to find out before your management demands proof!