Customer satisfaction research from the Quirk’s archives
Editor's note: Pro Insights will offer a handful of tips and commentaries on a single topic, drawn from our vast library of past magazine and e-newsletter articles. Let us know (joe@quirks.com) if there are subjects you’d like us to tackle. As always, you can start your own search by accessing these and thousands more articles for free at www.quirks.com.
Don’t overlook employees’ role in achieving customer satisfaction
A sometimes forgotten component of customer satisfaction is the part your employees play in building – or destroying – goodwill among customers. Especially in this era of viral outrage, when anyone with a phone and a social media account can lay waste to an entire company based on one employee’s alleged misstep or misbehavior, strengthening your frontline is one of the most effective ways of strengthening your bottom line. In his October 14, 2013 e-newsletter article, “Why employee research is critical to customer satisfaction,” Michael Vigeant offered a quick overview for companies wondering what to cover in employee research.
When employees are happy, driven and empowered, customers sense these traits and respond favorably. If a customer service representative can go above and beyond to resolve an issue immediately, without escalating through several managerial channels, the customer is the ultimate winner. When an employee sits down each morning with a smile before reaching out to clients, that optimistic outlook becomes a company trait.
So how do you ensure employees are putting their best face forward? Research. Ask them what is working, understand their pain points and respond. Using employee satisfaction surveys, companies can gauge internal potential for both success and decline. Employee satisfaction data measures the following components:
Key motivators. Maybe money is not what your employees need for validation. Other options might be growth, company culture, encouragement, leadership or education. Without asking what drives these individuals, how can you ever meet or exceed their expectations?
Levels of empowerment. Employees who feel armed to help customers are much more likely to follow through. When employees feel confident they can handle situations, customers get immediate results and validation that the organization wants to help.
Company education. A broad look at what employees know about the organization and offerings will provide insight into additional skills or information needed and highlight employees or departments best prepared for next steps and responsibilities.
Basic needs. When employees feel well-equipped, they can provide for customers more quickly. Whether office supplies or training, knowing what employees lack allows management to respond.
Communication preferences. The HR department spends hours a month on the internal newsletter – is anyone reading it? What about the Friday reminders from accounting? Understanding how and when to reach out to employees allows your organization to cut out costly ancillary tasks and focus on efficiency.
How to help customers avoid death by a thousand surveys
Though their article, “Researchers, has your company become CSM crazy?,” is from 2009, Doug Pruden and Terry Vavra’s argument that companies are over-surveying their customers is probably truer now than ever, given the proliferation of mobile phones and other digital ways for companies to solicit feedback after every interaction, no matter how small. They offered four issues to consider before hitting the send button on that survey.
We believe that it’s time to establish some better rules for when – and to whom – to administer a CSM survey. As a start we think management should address four key issues:
Determine the rationale behind your information need. We’ve always maintained that there are at least two distinct types of satisfaction surveys: transaction-driven surveys (follow-ups to a specific action or process in which the customer has participated) and annual assessments (periodic pulse-reads of the general customer base, gauging overall satisfaction with the relationship customers have established with the organization). Each of these survey types deserves its own guidelines dictating when and to whom questionnaires should be distributed. In other words, thought should be invested in the process before the questionnaires are unleashed.
Drive your CSM efforts with a database. One of the root causes of the plethora of CSM invitations we all receive is the relatively mindless way invitations are distributed. All CSM efforts should be coordinated with a database of customers. This means that annual or periodic assessments will not be conducted as a census activity but rather with a specifically identified sample of customers.
Assuming the number of identified customers is relatively large, we suggest cycling through the customer base so that customers are only contacted once every two, three or four years. Most corporations have a large enough base of customers to allow split-sampling, tri-sampling or an even greater division of the customer base so as not to sample the same customers year-in and year-out.
Customers can be segmented so that specific groups of customers (high-value customers, newly acquired customers or declining-activity customers) are targeted in a specific year’s invitations. A controlling database needs to be maintained from year to year with previous years’ invitees and respondents flagged. This system can serve to protect customers from being oversurveyed.
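As a rough illustration of the rotation-and-flagging approach described above, the sketch below shows one way a research team might pull an annual-assessment sample from a customer database while skipping anyone invited recently. The record layout, the segment names, the three-year rest period and the per-segment sample size are all assumptions for the example, not part of the authors’ recommendation.

```python
import random
from datetime import date

# Hypothetical customer records; in practice these would come from the
# organization's customer database or CRM export.
customers = [
    {"id": 1, "segment": "high_value", "last_invited_year": 2021},
    {"id": 2, "segment": "new", "last_invited_year": None},
    {"id": 3, "segment": "declining", "last_invited_year": 2024},
    # ... thousands more records
]

REST_YEARS = 3          # assumed rest period between invitations
SAMPLE_PER_SEGMENT = 1  # assumed sample size per segment, for illustration only
current_year = date.today().year

def eligible(customer):
    """A customer is eligible if never invited, or not invited recently."""
    last = customer["last_invited_year"]
    return last is None or (current_year - last) >= REST_YEARS

# Draw the sample segment by segment so high-value, new and declining
# customers can each be targeted deliberately rather than surveyed as a census.
sample = []
for segment in {"high_value", "new", "declining"}:
    pool = [c for c in customers if c["segment"] == segment and eligible(c)]
    sample.extend(random.sample(pool, min(SAMPLE_PER_SEGMENT, len(pool))))

# Flag this year's invitees so next year's cycle skips them automatically.
for c in sample:
    c["last_invited_year"] = current_year

print([c["id"] for c in sample])
```

The key design point is the flag: because each invitation is written back to the controlling database, the rest-period rule enforces itself from one wave to the next.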
Don’t treat all interactions equally. We’d also urge companies to reserve CSM follow-ups for significant interactions. For example, a customer’s simple request – one that should be easy to fulfill in under two minutes – probably shouldn’t trigger a survey invitation. Unless the experience was a complete disaster, nothing useful will be learned. Sending an invitation for such a minor interaction might suggest to the customer a lack of concern for their time and, as a result, be detrimental to the relationship.
Respect your customers' time – don't ask for known information! Far too many CSM surveys these days are retrieval missions for information that corporations already know and could easily link to customers’ records. But rather than enhancing their databases and appending the known information, companies often take the lazy way out and rely on their customers to volunteer it. This is often the result of siloed information and activities.
Equip managers with what they need to act on satisfaction data
Echoing the earlier entry about the critical role that employees have in delivering a top-notch customer experience, an October 2011 article by T.J. Andre and Jeff McKenna (“Line managers explain why they don’t see much value in satisfaction measurement programs”) made the case for giving managers the right rationales and tools to implement the findings from customer research.
Over the past several years, in hundreds of conversations across industries and sectors, we have heard the same thing consistently: Line managers are unable to use their customer experience measurement results to improve business outcomes.
Step back and think about that for a second. Thousands of companies spend tens of thousands, hundreds of thousands or even millions of dollars to collect, analyze and distribute the results of customer experience measurement programs. And then line managers across those organizations say these programs don’t help them take action to improve the business. Ouch!
These aren’t just informal observations either – they echo a more formal study we conducted with 150 line managers in a cross-section of companies and industries. In the study, we asked managers at all levels of organizations about their experiences using customer satisfaction results – what problems they experienced, how frequently and, for each of the problems they cited, how bothersome it was when it occurred. The results validated the common and very painful realities we have been hearing in conversations with the managers who use customer feedback programs.
More specifically, they complain that they have no clear idea of what needs to be improved or how to improve it; that it’s hard to identify the most important information; that senior managers are unable to apply the information to decisions; that frontline staff is unable to apply the information to decisions; and that frontline staff is unable to understand the results.
Why so much pain and so much money wasted on deliverables that go unused? In a nutshell, it’s because traditional programs are good at keeping score but bad at providing the support and tools that help managers act on the insights gained.
The source of this problem can be traced to the different mind-sets that companies bring to their customer experience measurement programs and the strengths and shortcomings associated with those mind-sets. You can categorize customer experience measurement programs as analytic/data-centric or customer transaction-oriented.
The analytic or data-centric mind-set is focused primarily on keeping score and on diagnostic analysis of data to make comparisons and identify trends. Programs that serve this mind-set tend to provide lots of data, both to staff analysts and to mid-level and frontline managers. Managers often find the information they get from these programs overwhelming. Both the sheer volume of data and the need for additional analysis hinder managers from drawing meaningful conclusions. Further, they struggle to connect the data directly to actions that will drive improvement within their spheres of influence and responsibility.
The customer transaction mind-set, in contrast, focuses primarily on customer relationship management at the individual or location level. Programs that serve this mind-set tend to focus on identifying and closing the loop on individual customer service failures. They are characterized by a sense of urgency and action to maintain strong individual customer relationships through proactive service recovery. These programs tend to be much weaker at enabling diagnostic analysis of systemic problems, whether at a branch, regional, departmental or corporate level. This can lead to a management dynamic of addressing individual symptoms over and over while the underlying diseases go undiagnosed and untreated.
What is needed are tools that help line managers at every level act on the insights gained and enable senior managers to (fairly) hold their employees accountable for taking the actions that will have the most impact. The ideal customer experience measurement and management program would combine the strengths of each of these mind-sets while overcoming the weaknesses. Such a program would be characterized by a real-time close-the-loop service recovery system while also providing the analytical horsepower and tools to allow managers to easily see, diagnose, prioritize, plan and act on systemic customer experience issues.
Is having such a program even possible? And if so, can it be implemented at a reasonable cost? The answer to both of these questions is, fortunately, yes. So if the elements of the solution exist and can be combined cost-effectively, why is there still so much widespread pain around the lack of usefulness of customer satisfaction measurement programs? Once again, it comes down to mind-set. Many customer satisfaction programs are built and managed by third-party vendors or by internal market research teams. These folks, despite their marketing jargon and positioning as “business partners,” are in fact in the data business. And if you’re in the data business you deliver data. It may be very impressively packaged and displayed, but it’s still just data.
Those who understand and, most importantly, commit themselves to combining measurement techniques with technology to build tools that directly facilitate and support management action are the ones who will change the game.
Get execs’ attention by explaining CX findings in monetary terms
In his October 2019 article, “Getting the most from your CX marketing research,” frequent Quirk’s contributor John Goodman served up 10 best practices to apply across all forms of customer satisfaction or customer experience research. Many of them urged readers to tie the findings and accompanying recommendations to some tangible financial cost or benefit, as a way to ground action or inaction in the kind of monetary terms that seem to make management sit up and take notice.
Create an economic imperative that the CFO accepts. The monthly cost of inaction on each CX priority should be quantified using the market-at-risk approach, which enables you to prioritize problems for correction based on the portion of the customer base that may be lost. It considers both frequency and damage, as measured by impact on loyalty, increased risk and negative word of mouth. The market-at-risk methodology and the customer value figure should be validated in advance with the CFO or the resident financial cynic. Remember that CFO buy-in significantly increases the impact of the voice-of-the-customer (VOC) program on customer satisfaction improvement.
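The excerpt does not spell out the market-at-risk arithmetic, so the following is only a simplified sketch of quantifying a monthly cost of inaction in that spirit: customers affected by an issue, times the share at risk of defecting, times annual customer value. All issue names, counts, percentages and dollar figures are hypothetical, and word-of-mouth damage is omitted for brevity.

```python
# Simplified, hypothetical cost-of-inaction calculation in the spirit of the
# market-at-risk idea above; not the authors' published methodology.

annual_customer_value = 600.0  # assumed revenue per customer per year

issues = {
    # issue: (customers affected per month, share at risk of defecting)
    "billing errors":  (1_200, 0.20),
    "long hold times": (3_500, 0.05),
    "late deliveries": (800, 0.30),
}

def monthly_cost_of_inaction(affected_per_month, share_at_risk):
    """Customers likely to be lost each month, times their annual value."""
    return affected_per_month * share_at_risk * annual_customer_value

ranked = sorted(
    ((name, monthly_cost_of_inaction(*params)) for name, params in issues.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, cost in ranked:
    print(f"{name}: ~${cost:,.0f} per month at risk")
```

Ranking issues by this figure is what lets the team lead with a dollar amount the CFO has already agreed to, rather than with a satisfaction score.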
Package the survey results for ease of use by executives. CX survey results should be tailored to each audience and describe the top issues in no more than one to two pages. Complicated data tables that require study and analysis (e.g., top 10 complaints by top 15 products, giving the reader 150 data points to analyze) are a barrier to consumption of the results. When using data tables and graphs, proactively conduct the analysis for the reader and list the four key problems that most need attention. For maximum impact, estimate the monthly cost of inaction for each key issue and provide a suggested action plan with process metrics to measure impact.
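To make the “conduct the analysis for the reader” advice concrete, here is a minimal sketch of collapsing a complaints-by-product grid into the handful of issues that most need attention. The complaint categories, counts and cutoff of four are assumptions for the example.

```python
from collections import Counter

# Hypothetical complaints-by-product table (complaint category -> product -> count),
# standing in for the 10-complaints-by-15-products grid mentioned above.
complaints_by_product = {
    "billing errors":   {"Product A": 40, "Product B": 15, "Product C": 22},
    "long hold times":  {"Product A": 10, "Product B": 55, "Product C": 31},
    "late deliveries":  {"Product A": 25, "Product B": 12, "Product C": 8},
    "confusing manual": {"Product A": 5,  "Product B": 9,  "Product C": 4},
    "rude service":     {"Product A": 2,  "Product B": 3,  "Product C": 6},
}

# Roll the grid up to one total per complaint category...
totals = Counter({issue: sum(by_product.values())
                  for issue, by_product in complaints_by_product.items()})

# ...and hand the executive audience only the top four, not 150 data points.
for issue, count in totals.most_common(4):
    print(f"{issue}: {count} complaints this period")
```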
Present data in a positive tone and with creative ideas. While the CX survey audience should always be prepared for constructive bad news (see the next best practice), the survey results should strive for balance and also highlight positive accomplishments. For example, point out where previous initiatives had a positive impact or show how a process metric has improved. Blame should not be assigned to individual units, but dissatisfaction and its accompanying financial opportunity can be associated with particular cross-functional processes. Communicate to the operating manager, “You are doing well, but look how much more money you are leaving on the table that you would accrue if you did X.” By nature, processes are cross-functional and therefore less threatening. Also, if you add creative ideas suggested by customers and your customer service representatives, the report is repositioned as an idea source. One company’s customer service department had a section of its satisfaction tracking report titled “The Wacky Ideas Section.” The marketing, brand and product development departments viewed the section as an innovation source.
Prepare the internal audience for constructive bad news. One way to lose your audience is to unpleasantly surprise them with data that they find counterintuitive to their own experience or that is threatening. The following are critical to properly setting the audience’s expectations in advance: stress that research often produces counterintuitive surprises and will surface some unhappy customers; show that negative results highlight the causes of price sensitivity which, when identified, can be used to facilitate better margins; couple each negative result with a quantification of the upside revenue opportunity; and assure the audience that the findings will focus on process issues instead of affixing blame to a particular unit.