One goal, many drivers

Editor’s note: Peggy Wyllie is a founding partner, senior vice president and CFO of Client Insight LLC, a Boston research and consulting firm.

In an age of intense competition and shrinking customer budgets, businesses must maintain satisfied and loyal customers. Benchmarking customer satisfaction is a vital tool for helping companies learn where they stand with customers and what they need to do to achieve strategic goals. But without the due diligence to identify what to measure and why, businesses can miss significant opportunities to understand the customer’s experience, capitalize on goodwill and correct problems before customers defect. Worse still, they can spend hundreds of thousands of dollars while missing the mark.

Based on our experience, the key steps in benchmarking customer and employee satisfaction can be boiled down to: understand, define, detail and monitor.

Many touchpoints

Customers have many touchpoints with your company, including sales, product, support, advertising, billing and administration. It makes sense to understand to what extent their experiences with these functional areas are driving overall satisfaction with and loyalty to your organization.

For example, consider the case of one company and its customer call center. The firm had several metrics in place to continually monitor efficiency in fielding thousands of calls daily from a diverse customer base. These included the number of rings before a call was answered by a representative, the number of minutes (or, ideally, seconds) the rep spent on the phone with each customer, and the number of calls each rep fielded per day. While these metrics delivered valuable tracking data for internal purposes, they excluded the most important viewpoint - the customer’s.

Subsequent customer satisfaction research revealed that, while a promptly answered call was important, other factors trumped it in influencing satisfaction. These included the perception that the representative was knowledgeable and able to resolve the issue adequately, and that customers weren’t being “processed” and handed off to repeat their story ad nauseam in an attempt to fix the problem.

And don’t forget the initial frustration with the product itself that prompted the call. Each interaction with the company is a building block for satisfaction and loyalty, so it’s important to understand the linkages between them.

Understand changing workflow

Particularly with continually evolving technology, customers may be finding new ways of doing their work - which may or may not involve using your products. An excellent means of understanding changing customer workflow is the in-depth contextual interview. Conducted at the customer’s workplace, the interview is a guided discussion that touches on a number of key topics, including:

• what your customers need to do on a daily basis and the tools they rely upon;

• the extent to which they use “workarounds” to make up for deficient products or tools;

• top-of-mind stories of their experiences with your products and people;

• awareness, perceptions and use of direct and indirect competitors to your products;

• expectations for changes in their workflow that may offer new opportunities for you to serve them.

Observing how customers do their work with your (and your competitors’) products yields valuable insights about how your offering helps get the job done, core expectations for your products and services and, most importantly, what would delight your customers. All of this information is critical for building an actionable satisfaction measurement program.

Define key metrics

Involving your key people from various functional areas in customer interviews paves the way for a shared understanding of the customer’s world, which in turn makes it easier to collaborate across functional boundaries in pursuit of an integrated, customer-focused strategy. In addition, the rich qualitative data gathered in the interview phase can ensure that you’re measuring the performance and attributes that matter most to customers.

As an example, consider the company that continually measured the extent to which customers viewed it as “innovative” in its satisfaction studies. For the salesperson, innovative might mean new or flexible pricing structures. For the account support representative, it might mean a new self-service customer Web site. For the CEO, it might boil down to more frequent product releases than the competition.

For the customer, though, it can mean all of the above, something different, or nothing at all. That’s why it’s important to use both internal and external data to determine specific objectives for satisfaction measurement and definitions of the attributes measured. For each objective, the team should have an idea of the action that they can take based on the data. Using the above example, it’s much easier to form a plan to address waning satisfaction with the ability to self-service one’s account than it is to address general dissatisfaction with the level of innovation vis-à-vis competitive choices.

A helpful exercise in defining key objectives is to gather the cross-functional team in a room and, using Post-it Notes, group and prioritize objectives. Not only does this get the team thinking about what they need from the study, it also provides a blueprint for the survey instrument itself. For example, if a key objective is to understand differences in product satisfaction based on how long customers have been using the product and how frequently they rely on it, the researcher knows right away to include the relevant background variables in the sample and questions in the survey to ensure that sufficient data is collected for these analyses. Likewise, openly sharing all possible questions at the start makes it easier to determine what to drop from the study should there be space or time constraints.

Drill down

Arriving at sufficiently detailed data to meet objectives is a function both of the questionnaire and the analysis plan. While the executive team may be most interested in the overall percentage of satisfied customers (the “magic number”), it’s critical to drill down and identify what’s driving satisfaction and loyalty.

You don’t get there by asking customers how important different attributes are to their level of satisfaction. People have a much harder time differentiating an abstract concept like “importance” than they do rating how satisfied they are with performance.

A strong tool for prioritizing where management action is needed to address satisfaction shortfalls is derived importance analysis, which correlates satisfaction with underlying features to overall satisfaction. Take the example of an online information product. Customers are asked their overall satisfaction with the product and then asked to rate their satisfaction with a number of product features, like the ability to easily locate information via search functions, the timeliness of posting new information, the ability to manipulate the information, etc. The resulting analysis yields a two-by-two matrix that clearly identifies where satisfaction is not keeping pace with derived importance. It also shows where customers may be oversatisfied relative to an attribute’s importance. Thus, at a glance, managers can target resources for maximum benefit to customers. For further detail, derived importance analysis can be run for customer subgroups such as geographic location, spending tier and functional area.
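As a concrete illustration, the sketch below shows one way such an analysis might be computed, assuming survey responses sit in a pandas DataFrame with one row per respondent. The column names, the use of simple correlations as the measure of derived importance and the median-based quadrant cutoffs are all illustrative assumptions, not details from the study described here.

import pandas as pd

def derived_importance_quadrants(df: pd.DataFrame,
                                 overall_col: str,
                                 feature_cols: list[str]) -> pd.DataFrame:
    """Correlate each feature's satisfaction with overall satisfaction
    (derived importance), pair it with the feature's mean satisfaction,
    and place each feature in a two-by-two quadrant."""
    rows = []
    for col in feature_cols:
        rows.append({
            "feature": col,
            "derived_importance": df[col].corr(df[overall_col]),  # implicit importance
            "satisfaction": df[col].mean(),                       # current performance
        })
    result = pd.DataFrame(rows)

    # Split each axis at its median to form the quadrants.
    imp_cut = result["derived_importance"].median()
    sat_cut = result["satisfaction"].median()

    def quadrant(row):
        if row["derived_importance"] >= imp_cut:
            return "fix first" if row["satisfaction"] < sat_cut else "maintain"
        return "possible oversatisfaction" if row["satisfaction"] >= sat_cut else "low priority"

    result["quadrant"] = result.apply(quadrant, axis=1)
    return result

# Hypothetical usage for the online information product example:
# quadrants = derived_importance_quadrants(
#     survey, "overall_sat", ["search_sat", "timeliness_sat", "manipulation_sat"])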

Analyzing the drivers of satisfaction and likelihood to refer (a strong proxy for loyalty) helps unite the details into a more holistic view of the customer experience. Let’s look again at the call center satisfaction benchmarking. Loyalty driver analysis showed that call center attributes like depth of support staff knowledge and timeliness of issue resolution were joined by non-call center attributes like satisfaction with products, product training and the sales force’s ability to understand needs - underscoring the idea that no function is an island.
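One common way to run this kind of driver analysis is to regress likelihood to refer on the candidate attributes and compare standardized coefficients. The sketch below assumes that approach and uses invented column names, so treat it as an illustration rather than the exact method used in the study above.

import pandas as pd
import statsmodels.api as sm

def driver_weights(df: pd.DataFrame, outcome: str, drivers: list[str]) -> pd.Series:
    """Standardize the outcome and driver ratings, fit an ordinary
    least squares model, and return the coefficients ranked from
    strongest to weakest driver."""
    cols = [outcome] + drivers
    z = (df[cols] - df[cols].mean()) / df[cols].std()  # standardizing yields beta weights
    model = sm.OLS(z[outcome], sm.add_constant(z[drivers])).fit()
    return model.params.drop("const").sort_values(ascending=False)

# Hypothetical usage mixing call center and non-call center attributes:
# weights = driver_weights(
#     survey, "likelihood_to_refer",
#     ["staff_knowledge", "issue_resolution", "product_sat",
#      "training_sat", "sales_understanding"])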

In this case, the fact that satisfaction with the call center and the likelihood to refer remained fairly constant - despite a significant downsizing and reorganization of the call center and increased product dissatisfaction - reflects a real achievement by the call center staff. Viewing the call center satisfaction data in isolation would have painted a different and less accurate picture of the customer experience.

Early indications

Most companies benchmark satisfaction at intervals ranging from one to three years. Much can happen in the meantime, though, which is why it’s important to track ongoing customer interaction ratings. Using data from benchmarking studies, companies can build and conduct brief, frequent surveys among small samples of customers to track satisfaction with key drivers and provide early indications of issues that should be addressed before they grow into deal-breakers. A feature to add to these pulse surveys is an automatic alert function that notifies a point person should a customer register dissatisfaction with any element of the service they received. This allows for early intervention with the customer and faster resolution of problems. It also provides a forum for fluid measurement and adjustment of processes for product training, new release rollouts and communications, among others.
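In practice, that alert function can be quite simple. The sketch below assumes pulse survey responses arrive as dictionaries of ratings and that notify_point_person is a stand-in for whatever e-mail or ticketing hook a company actually uses; the threshold and field names are hypothetical.

ALERT_THRESHOLD = 2  # e.g., ratings of 1 or 2 on a 5-point scale count as dissatisfied

def check_pulse_response(response: dict, notify_point_person) -> bool:
    """Flag a response if any rated element falls at or below the threshold
    and pass the offending items to the notifier."""
    low_rated = {item: score
                 for item, score in response.get("ratings", {}).items()
                 if score <= ALERT_THRESHOLD}
    if low_rated:
        notify_point_person(customer_id=response.get("customer_id"),
                            issues=low_rated)
        return True
    return False

# Hypothetical usage with a stubbed notifier:
# check_pulse_response(
#     {"customer_id": "C-1042", "ratings": {"issue_resolution": 1, "staff_knowledge": 4}},
#     lambda customer_id, issues: print("Alert:", customer_id, issues))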