Editor’s note: Dennis Murphy is vice president of the technology practice at Directions Research, Cincinnati. Chris Goodwin is a vice president at Directions Research. This is the third of a three-part series of articles. Parts I and II appeared in the July and August issues.

This series began back in July with a postmortem on customer satisfaction measurement’s failings. We grouped our comments under four headings: insubstantial theory, haphazard execution, measurement confiscation and inappropriate application.

In August, the second article underscored the consequences resulting from a myopic or misguided assessment of customer satisfaction. This third and final article explores how to resurrect the discipline through a combination of repositioning and the use of a rescue kit of tools - some new approaches and others oft-forgotten but still sound research practices.

Our prior critiques may have seemed harsh, but they weren’t meant to imply that no one is getting customer satisfaction right. In fact, the original drafts of all three articles were written by one of the authors from the cabin lanai of a cruise ship (yeah, my wife thinks I’m crazy too). The final morning of the trip I was pleasantly surprised by the customer satisfaction survey that was slipped under the cabin door. Here’s what the cruise line did right:

•   The survey itself made it absolutely clear that the organization’s purpose was understood: your satisfaction is a means to their end - they want to sell you your next cruise.

•   It was equally clear what the recipient benefit of participation was - your next cruise would be even better synchronized to your needs than the current one.

•   The execution instrument itself didn’t make the customer feel as if they were the one being executed.

•   Finally, the cruise line had early on demonstrated excellent customer service by handling a couple of small issues flawlessly, so it was clear that the survey information was for improving customer satisfaction - not just customer satisfaction scores.

As we build a case for “doing it right” you’ll hear a bit more about the cruise. Here goes.

Insubstantial theory

Best practice #1: Always embed a sales surrogate in every survey.

A brand choice exercise usually works well. Even better, include actual customer revenue. (Note: We realize that this sometimes requires taking on the database team, the privacy officer, network security personnel, lawyers and some sales guy named Joe who always wants to protect only his customers from getting surveyed. But the results are more than worth the effort - except maybe for poor Joe.) This practice applies to all research, not just customer satisfaction. Explanatory information doesn’t exist without something meaningful to explain.

With rare exceptions, the goal of business behavior is to sell - plain and simple. This means that most any survey should have a sales surrogate embedded as the most essential question. If you’re trying to explain anything other than a financial performance measure (sales/revenue/profit) you’re dealing with auxiliary objectives - not the bottom line.

In research-speak, the dependent variable is the idea we’re attempting to explain, or, in a business sense, the result we are trying to achieve. The independent variables are the myriad items we’re using to predict and explain the result. Too often organizations fail to embed a cogent dependent variable and forfeit the real power of research. What we’re left with is little more than an impotent shopping list and a series of net scores. If we don’t have a meaningful result to predict, we just have nice-to-know factoids. It’s like driving a car so you can read the road signs.

Though this omission sometimes happens innocently (we forgot why we got in the car to drive in the first place), naïveté and laziness are often the culprits.

Observation: It is a “research sin” to design a survey that lacks something to be explained. A survey without a dependent variable is a trip without a destination.

Now here’s the kicker: Satisfaction should not be a dependent variable, or at least not the only one!

Bonus observation: A survey without a sales surrogate is a pistol without bullets.
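To make the dependent-variable idea concrete, here is a minimal sketch of a driver ranking built around an embedded sales surrogate. It assumes a small table of survey responses in which a hypothetical purchase_intent column plays the sales-surrogate role; every column name and number below is illustrative, not anyone’s production data.

import pandas as pd

# Illustrative survey responses: purchase_intent is the embedded sales surrogate
# (the dependent variable); the att_* ratings are the independent variables.
responses = pd.DataFrame({
    "purchase_intent": [9, 4, 7, 8, 3, 6],
    "att_support":     [8, 3, 6, 9, 2, 5],
    "att_price":       [5, 6, 7, 4, 3, 6],
    "att_reliability": [9, 4, 8, 8, 2, 7],
})

# Which attributes actually move the needle on the result we care about?
drivers = (responses.corr()["purchase_intent"]
           .drop("purchase_intent")
           .sort_values(ascending=False))
print(drivers)

Without the purchase_intent column there is nothing to rank against - you are back to the impotent shopping list of net scores.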

Best practice #2: Demand that customer satisfaction prove its mettle.

Customer satisfaction deserves “a seat at the corporate table” if and only if it has earned it. Now that you’ve incorporated a sales measure (best practice #1), examine whether the assumption that customer satisfaction impacts sales is valid. If it is, great; on the other hand, if satisfaction doesn’t matter - or matters only minimally - maybe you have better places to put your money.

While we want to understand what drives satisfaction, we need to understand what drives sales. Therefore, we should think of customer satisfaction as an intermediate or auxiliary variable in this more global effort. Think of a hierarchical diagram - a battery of factors drives these intermediate/auxiliary factors and, in turn, the intermediate/auxiliary factors drive overall performance.

Customer satisfaction is an auxiliary component. As an intermediate-level result, it is a potential contributor to the highest-level result: the organization’s financial performance. If you fail to incorporate the latter, you’ll be erroneously explaining satisfaction as an end unto itself.

So what’s wrong with that, you ask? Here’s what: What if satisfaction has little or no impact on sales?

If there are no alternatives to your product, or the costs of switching are incredibly high, then satisfaction may not be a significant driver of future sales. Think of financial accounting systems or small-market airlines as examples of limited choice. In the short term, customer satisfaction may be irrelevant in these decision environments.
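If you would rather test that assumption than assert it, the following minimal sketch compares how well a sales measure is explained with and without satisfaction in the model. The data, the column names and the plain least-squares fit are illustrative assumptions on our part, not a prescribed method.

import numpy as np
import pandas as pd

# Illustrative data: revenue is the highest-level result, satisfaction the
# intermediate/auxiliary variable, switch_cost another candidate driver.
df = pd.DataFrame({
    "revenue":      [120, 45, 80, 150, 30, 95],
    "satisfaction": [9, 4, 6, 8, 3, 7],
    "switch_cost":  [2, 8, 5, 3, 9, 4],
})

def r_squared(X, y):
    # R-squared from an ordinary least-squares fit with an intercept.
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

without_sat = r_squared(df[["switch_cost"]].values, df["revenue"].values)
with_sat = r_squared(df[["switch_cost", "satisfaction"]].values, df["revenue"].values)
print(f"R-squared without satisfaction: {without_sat:.2f}")
print(f"R-squared with satisfaction:    {with_sat:.2f}")

If adding satisfaction barely moves the number, you have your answer about where the money should go.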

Haphazard execution

Best practice #3: Practice KISS and think of it as meaning “Keep it short, stupid.” A responsible analyst asks exactly what he or she needs, and not one thing more.

Customer feedback is a gift, one bestowed in response to a respectful request. If we recognize this process as a request rather than a demand, then at least we’re getting the relationship with our respondents right. Our mothers taught us to ask nicely, so let’s put mom’s lessons to practical use.

Brief and easy surveys seldom meet all of our client specifications, but if clients fully comprehended the penalties assessed for long and complex surveys, they likely would reconsider. A conference speaker once described the ultimate customer satisfaction survey as a single question: “How did we do?” What we like about this approach, beyond its obvious simplicity, is that it hits a simple truth dead on. It allows the customer to tell you what they think is important rather than responding to what you tell them you - the client - think should be important.

Now we’re not so naïve as to believe that we can get off as easily as asking “How did we do?” (although we could create a “net doer score” or NDS), but we do strive for simplicity and brevity. The “What can we cut?” mentality produces better work than the “What can we add?” approach, if for no other reason than it lessens the customer burden, which in turn holds the customer’s attention.

Best practice #4: Make the survey beneficial to the customer.

You can’t always make a survey directly (we’ll tell you the results) or indirectly (you use our products) beneficial, but when you can, it pays dividends. When surveys are sponsor-identified, response rates increase because a personal connection has been made. This is not always possible - and not always even desirable - since there are times you must avoid any kind of identification bias. But when it is feasible, it does enhance engagement.

No one we know jumps out of bed in the morning planning their day around all of the surveys they can take - or if they do, then let’s agree that they’re weird. Most of us have been subjected to far more time-consuming and dull surveys than interesting and enlightening exercises - way more - and we seldom see “What’s in it for me?” beyond perhaps some modest remuneration. And even then, isn’t this really closer to bribery than an enthusiastic contribution on the part of respondents?

Observation: If the customer understands, first, why we solicit their input and, second, how they might actually benefit, then their involvement increases dramatically.

Recall our cruise. My wife filled out a survey which in other instances she would have tossed. She got that the cruise line cared and the task was manageable for her.

Measurement confiscation

Best practice #5: Protect respondents by representing their interests. Surely no one else will.

Being “the voice of the customer” means balancing your clients’ desires with the respondents’ capabilities. Adding one more question because you “were told to” isn’t being a professional; it’s being a clerk.

Measurement confiscation happens when everyone wants a piece of your survey. They only want 30 seconds here, 60 seconds there or “one” question that ends up having 10 parts. These questions can’t help but create a disjointed structure that disrupts any continuity as the respondent attempts to do their job.

Observation: The more sense the client’s questions make to a respondent, the more sense the respondent’s answers make to the client.

There is no better way to lose the interest of a respondent than to ask them about things they have little interest in. If they get so bored that they quit, that’s bad; if they get bored yet push on, providing random garbage, that’s even worse. And you seldom know when that has happened. You have to think about your customer as much in research as you do in sales.

Observation: We design surveys so that customers can tell us what we want to hear. Shouldn’t we be constructing them so that customers can tell us what they have to say?

Measurement confiscation and the next challenge, inappropriate application, start to move us into the realm of “political” or “organizational” challenges. This is where people who know nothing about research and who are usually not directly responsible for the bottom line of the business add to or repurpose surveys. If CEOs knew - really knew - how much the instruments they need to run the business were being undermined by well-meaning but wrong-headed executives, the head of market research would report directly to the president!

Okay, once we set aside our delusions of grandeur, we recognize, of course, that the CEO and/or president has far more important things to do, like personally convincing your largest customers to become larger customers. It is you, the researcher, who must fight against measurement confiscation alone, and this is not necessarily the kind of battle that most researchers are taught to confront.

This is one reason why top market research executives are often imported from other disciplines. These folks may not be the greatest researchers but they know the business. Said otherwise, they’re great business folks who understand research. We know, we know: It’s terribly unfair and ironic that in a profession built on hard scientific and mathematic skills, it’s the softer personal skills that so often lead to promotion.

Best practice #6: Create interchangeable modules in tracking surveys.

All longitudinal studies (brand tracking, customer satisfaction, etc.) eventually succumb to the “no new news” problem. Isolating and maintaining the core measures and then varying the modules adds long-term vitality by always providing a source of “new news.”

So, what kind of tools can you use to keep your survey from being hijacked by every executive with scant information and no research budget? Most tracking surveys, especially customer satisfaction surveys, include many questions that change little or not at all from month to month or quarter to quarter. For example, in a quarterly tracker, it is usually only necessary to launch the full boat of questions once a year, freeing up that survey space for the other three quarters. One of the biggest absorbers of space can be the need to track factors of large driver models. It’s possible to do that by just tracking the top attributes.

Technically/methodologically/statistically, it’s not necessary to include all those questions in every quarterly version of the survey. If you are doing a quarterly tracking survey, chances are that some core metrics make their way into an executive scorecard or market summary that gets shown to the boss every quarter. We wish we could help you there, but the tracking survey is probably being paid for mostly because of that one page of metrics. Live with it.

As for the rest of the questions: Put them into modules, short groups of questions that can easily be moved in and out of the survey at a moment’s notice. With a little bit of advance planning at the beginning of the design process, programmers, data processors and tab generators can create a flexible structure that allows these modules to be utilized.

Observation: When designing any tracking system think about a Lego set.
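To make the Lego comparison concrete, here is a minimal sketch of how a modular tracker might be specified, assuming a simple quarterly wave scheme. The module names, questions and rotation plan are hypothetical placeholders.

# Core metrics run every wave; modules snap in and out like Lego bricks.
CORE = ["overall_satisfaction", "purchase_intent", "likelihood_to_recommend"]

MODULES = {
    "driver_deep_dive":  ["support_quality", "onboarding_ease", "billing_accuracy"],
    "competitive_set":   ["brand_awareness", "brand_consideration"],
    "one_off_executive": ["new_logo_reaction"],   # in for one wave, then out again
}

# Rotation plan: the full battery once a year (wave 1), one module per wave after that.
ROTATION = {1: list(MODULES), 2: ["driver_deep_dive"], 3: ["competitive_set"], 4: ["one_off_executive"]}

def build_wave(wave_in_year: int) -> list[str]:
    # Assemble the question list for a given quarterly wave.
    questions = list(CORE)
    for module in ROTATION.get(wave_in_year, []):
        questions += MODULES[module]
    return questions

print(build_wave(2))   # core metrics plus the driver deep-dive, nothing else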

We’ve all had trackers where a one-off question becomes a permanent part of an ever-lengthening survey. Now here’s the tough part: Bargain hard. If you absolutely can’t resist the pressure of an executive who wants a short set of questions in your ongoing tracker, don’t let it become a permanent part of your survey. Give them a module for one wave so that there is a mechanism for putting in and taking out the questions. By the way, feel free to use this on other studies besides customer satisfaction.

Best practice #7: Make all research research on research.

Our trade is about learning. Try new things. Experiment. If you believe market research is nothing more than applying your college course work, you’re frankly not enhancing your profession.

Now, what about those demands for questions to be a permanent part of your survey? Your best tool for fighting these demands is to get hard data on what is relevant and what is not, and you can do that with the portfolio of research you have today. You need data that shows a) the detrimental impact of unfocused surveys and, b) what is truly relevant to satisfaction.

We could write (and people have written) whole books on what you need, but let us make a few suggestions. Some of this data is already available in generic form, but we find that only data from your own customers is relevant to the executives you are trying to influence:

•   Chart the dropout rate by time with your audience to argue for shorter surveys.

•   Chart the increase in cost per interview with your audience as surveys get longer.

•   Run correlations of all questions against your dependent variables (see the section above on what those should be). Keep a running list across studies of what is actually related; a sketch of this bookkeeping follows the list.

•   For really important measures, like overall satisfaction, follow the closed-ended question up with an open-ended one: “Why?”

•   End every survey with this question: “Please tell us what you think about this survey.”

•   Ask respondents what they think is important (stated measures); don’t just derive importance.
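Here is a minimal sketch of the bookkeeping behind the last two items: derived importance computed as a simple correlation with the dependent variable, set next to what respondents themselves said was important. The column names and the stated-importance shares are hypothetical.

import pandas as pd

# Illustrative responses; purchase_intent is the dependent variable from best practice #1.
survey = pd.DataFrame({
    "purchase_intent": [9, 3, 7, 8, 2, 6],
    "q_support":       [8, 2, 6, 9, 3, 5],
    "q_packaging":     [5, 6, 4, 5, 6, 5],   # the pet question someone insisted on
    "q_reliability":   [9, 3, 8, 8, 2, 7],
})

# Derived importance: correlation of each question with the dependent variable.
derived = survey.corr()["purchase_intent"].drop("purchase_intent")

# Stated importance: the share of respondents who picked each item as vital.
stated = pd.Series({"q_support": 0.62, "q_packaging": 0.08, "q_reliability": 0.71})

running_list = pd.DataFrame({"derived": derived, "stated": stated}).sort_values("derived", ascending=False)
print(running_list)   # ammunition for cutting questions that relate to nothing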

Best practice #8: Rediscover self-explicative (stated) importance.

Here’s what some might see as a curmudgeonly point of view: The advent of calculators diminished our math abilities, and computers have damaged our hypothesizing. We just run every conceivable alternative. What we call derived importance is in reality nothing more than correlation, and it has supplanted stated importance. Give folks some credit and actually ask them what matters. You may rediscover that they know - and it makes a lot more sense than most derived answers.

The final item on the list for research on research has more power than it at first appears. There is always a natural tension between the client belief that “more is more” and the researcher’s experience that in fact “less is often more.” Traditionally, clients would demand that we ask for the respondent’s perspective on a whole bunch of brands over a whole lot of attributes. We’ve seen this matrix - 10 brands by 30 attributes - demand 300 separate responses. Having all this data would be delightful except for the fact that respondents go brain-dead long before answering even a fraction of these queries.

We have a completely different approach:

a) We ask respondents to tell us which brands on a given subject are most relevant to them, not just the ones they are familiar with or the ones the client “thinks” they should know.

b) We give them a list of attributes upon which these brands might be evaluated and then rely upon the respondents to select those they consider most vital.

c) Through these first two steps, each respondent creates their own unique matrix, and it is simply this reduced set that we ask them to assess.

Respondents will be answering more interesting questions and doing less work. More importantly, instead of having lots of noisy data from fatigued and bored respondents, we will have less data that has more meaning. If it sounds like responding to a limited matrix - brands you know on attributes you care about - might produce more thoughtful results, well, that’s exactly what we’ve found. This philosophy, “relevant space,” developed in conjunction with our partner, Cisco, has been put to the test over four years now. If you want to read more about it, an explanation can be found on Wikipedia.
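For readers who want to see the mechanics, here is a minimal sketch of how a relevant-space grid might be assembled for a single respondent. The brand and attribute names and the selection caps are our own illustrative assumptions, not the production implementation.

ALL_BRANDS = ["BrandA", "BrandB", "BrandC", "BrandD"]
ALL_ATTRIBUTES = ["reliability", "price", "support", "design", "speed"]

def build_relevant_matrix(chosen_brands: list[str], chosen_attributes: list[str],
                          max_brands: int = 3, max_attributes: int = 4) -> list[tuple[str, str]]:
    # Return only the (brand, attribute) cells this respondent will be asked to rate.
    brands = [b for b in chosen_brands if b in ALL_BRANDS][:max_brands]
    attributes = [a for a in chosen_attributes if a in ALL_ATTRIBUTES][:max_attributes]
    return [(b, a) for b in brands for a in attributes]

# One respondent's reduced grid: 2 brands x 3 attributes = 6 ratings, not 4 x 5 = 20.
cells = build_relevant_matrix(["BrandB", "BrandD"], ["reliability", "support", "price"])
print(len(cells), cells)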

With these and other measures, hopefully you will have an arsenal of facts to help you in your battle to do customer satisfaction studies the right way. But researchers can do great research and still face the next challenge.

Inappropriate application

Best practice #9: Lobby against inclusion of customer satisfaction in scorecards and compensation.

When personal gain begins to supplant customer well-being, the system becomes corrupt. There is nothing inherently wrong with including customer satisfaction in scorecards, other than the fact that the measures lack actionability. The problem is more one of a slippery slope, with inclusion in the compensation formula following close behind. That step invariably leads to bad behavior.

When the goal of surveying customers goes from pleasing the customer to pleasing the organization, you know that customer satisfaction has been inappropriately applied. This usually starts when a department or division starts using CSAT scores as measurements of efficacy or even a goal. As we wrote in the previous articles, if CSAT becomes part of the executive compensation scorecard, it’s extremely difficult to maintain an effective survey.

We are back to those “softer” research skills, where there are no hard-and-fast rules. Unfortunately, we can’t create a magic research technique to solve this problem. We’ve spoken at length in our two previous articles in this series on why you should avoid this. To recap:

•   A survey that was once intended to benefit the customer is now intended to benefit the organization, especially the executives being compensated based on the surveys.

•   Market researchers become the “police” by being in the position of creating the metric that becomes a judgment on the organization and its executives.

•   Ironically, just when research seems to be legitimized by having an audience with the highest decision makers, those same decision makers have the most incentive to question the skills of the researchers (especially if CSAT scores go down!).

Observation: Let’s put the customer back in customer satisfaction.

Best practice #10: Keep working to make customer satisfaction better - don’t set it and forget it.

As we know it today, customer satisfaction is still closer to an undiscipline than a discipline. Our intent is to introduce more rigor. We don’t pretend to have all the answers, but we hope to have stimulated a conversation that brings new vitality - and yes, new discipline - to customer satisfaction.

We started this series of articles by lambasting the current state of customer satisfaction research across a wide range of issues. In the second article, we detailed the negative consequences of those problems on well-meaning researchers and organizations as a way to identify areas in which a well-designed customer satisfaction program could have positive consequences. We hope that these articles will give you some arguments for keeping customer satisfaction out of the executive scorecard.

There are endless possibilities, changes and tweaks you can do to make customer satisfaction studies better. And the changes aren’t all methodological; some of the most important are positioning and political. But despite the savaging we gave customer satisfaction in our first article, we recognize it still deserves attention - first to determine if it does matter for your product category and then, if it does, making certain its contribution is understood and optimized.

Honestly, some things are in your control and others are not. Even if your study still ends up stuck in the scorecard and compensation formulas, perhaps our articles have sensitized you to where to look for land mines. We hope this final installment has identified practices (some old, some new) in each of the four problem areas that can help you revive customer satisfaction and give it life. Not a new life, but the old life it had or should have had: delighting your customers, not rewarding your employees.