Editor’s note: Dave Fish is senior vice president, expert services at research firm MaritzCX, Salt Lake City, Utah.

The current emphasis on customer experience (CX) is undeniable. Estimated spend for CX activities in 2014 was $3.77 billion; this is expected to grow to $8.39 billion by 2019.

Unfortunately, all of this investment is not making the hoped-for impact on the customer experience. The American Customer Satisfaction Index shows only modest improvement in overall satisfaction across industries, rising from 74.8 at the index's inception in 1994 to 75.6 in 2014. Likewise, a recent MaritzCX study reveals that 78 percent of global CX programs are not meeting executive expectations.

Why? Measurement alone is not the answer; the key to successful CX systems is collecting data to draw out and act on meaningful insights.

Applying the CX value chain

Researchers can apply the CX value chain with a series of steps that starts with measurement and travels all the way through the process of changing the customer experience. The value chain is fueled by three different forms of insight: tactical, operational and strategic. Each step in the chain is predicated on the successful execution of the previous link. And, while tactical insights can be largely automated, operational and strategic insights require inherent curiosity.

The chain is fairly straightforward. A customer has an experience, that experience is measured, that information is communicated as insights to stakeholders, those stakeholders commit to action and then action is taken, which comes full circle to impact the customer experience in the future.

Link 1: Brand experience

The chain begins with a customer experience – buying an e-book via a mobile device, making an inquiry on a bank account online or having a vehicle serviced at an auto dealership – but customer experience is not limited to transaction-based service encounters. It is the complete journey customers take with a brand from awareness through disposal, with many stops along the way.

Therefore, we must view the customer experience longitudinally and across channels. Consumers do not perceive online, mobile, call center and in-store as separate entities – rather, they are just different modes of communicating and interacting with brands. Customers expect and demand one coordinated voice that knows them regardless of the channel.

In all instances, customers experience a brand through a series of interactions with different touchpoints, forming evaluative opinions and emotional responses that accumulate into an overall feeling about the brand. While most of the industry has focused on more rational issues in its evaluation of the brand experience, the future lies in understanding the multifaceted emotional responses customers have to individual interactions and to the brand as a whole.

Link 2: Measurement

The industry spends an inordinate amount of time at this step, sometimes to the detriment of others. Still, measurement is an important step that affects every subsequent link in the chain. How can researchers effectively measure customer experience? By focusing on these elements:

  • Ask the right people. In hospitality, there will always be at least one company that surveys only business travelers, even though leisure travelers make up a high percentage of its clientele. The reason? The sample is difficult and expensive to obtain and business travelers are often considered “high value” clients. Such a company sees only part of the big picture.
  • Ask the right things. The most glaring hole in current measurement is the gravitation toward hard operational questions about “cleanliness” and “timeliness” rather than more emotionally charged issues such as “honesty” and “low pressure.” Emotionally loaded items are harder to act on and harder to diagnose but far more predictive of important business outcomes than hygiene factors.
  • Ask at the right time. Measurement goes awry when the experience is under-specified: when companies do not understand the customer journey completely, they miss important moments of truth.

Link 3: Insights

Tactical insights are those that can be acted on immediately; they are often referred to as “hot alerts” because both the problem and its resolution are apparent. Identifying service recovery opportunities is a prime example. Problem: “Bob is upset about his cell phone plan.” Solution: “Follow up with Bob to resolve his concern.” Beyond service recovery, tactical insights also support proactive outreach to, and recognition of, employees and customers.
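To make this concrete, here is a minimal Python sketch of how a hot-alert rule might be automated. The field names, score threshold and escalation keywords are illustrative assumptions, not features of any particular CX platform.

```python
# A minimal hot-alert rule, assuming a hypothetical survey feed with a
# 1-10 satisfaction score and a free-text comment per response.
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    customer_id: str
    score: int    # overall satisfaction, 1 (low) to 10 (high)
    comment: str

def needs_hot_alert(response: SurveyResponse, threshold: int = 4) -> bool:
    """Flag a response for immediate follow-up when the score is low
    or the comment contains an escalation keyword."""
    escalation_terms = ("cancel", "upset", "complaint", "never again")
    low_score = response.score <= threshold
    angry_text = any(term in response.comment.lower() for term in escalation_terms)
    return low_score or angry_text

# Route flagged responses to the service-recovery queue.
responses = [
    SurveyResponse("C001", 3, "Bob is upset about his cell phone plan."),
    SurveyResponse("C002", 9, "Great service, thanks!"),
]
queue = [r for r in responses if needs_hot_alert(r)]
print([r.customer_id for r in queue])  # ['C001']
```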

While tactical insights are typically a one-step process (identify a problem, apply a solution), operational insights usually involve an intermediate step. Operational insights can also reduce the frequency of certain tactical problems by addressing the root cause. The tactical insight “customer did not receive floor mats” can be resolved by “deliver floor mats to the customer.” The operational insight in this example would be “customers in the southwest region, district 12, are complaining during the summer months about not receiving floor mats.” This has no immediate and apparent next step and requires a bit of homework. Why don’t they have floor mats? Is it all customers or just some? This is where curiosity and understanding of the industry come into play. In this case, it may be that only vehicles from a certain factory are affected and the root cause is a supply chain issue: an operational problem without an immediately obvious cause or solution.
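The aggregation behind an operational insight like the floor-mat example can be sketched in a few lines of Python. The complaint records and field names below are hypothetical; the point is that rolling tactical cases up by segment is what surfaces the pattern.

```python
# A sketch of rolling tactical complaints up into an operational view,
# assuming hypothetical records tagged with region, district and month.
from collections import Counter

complaints = [
    {"issue": "missing floor mats", "region": "southwest", "district": 12, "month": "Jul"},
    {"issue": "missing floor mats", "region": "southwest", "district": 12, "month": "Aug"},
    {"issue": "late delivery",      "region": "northeast", "district": 3,  "month": "Jul"},
]

# Count each issue by (region, district) to surface clusters that point
# to a root cause rather than isolated service-recovery cases.
by_segment = Counter(
    (c["issue"], c["region"], c["district"]) for c in complaints
)

for (issue, region, district), n in by_segment.most_common():
    if n > 1:  # an arbitrary threshold for this illustration
        print(f"{issue}: {n} complaints in {region} district {district}")
```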

With strategic insights, the problem, cause and the solution are not readily apparent. The first step in discovering strategic insights is to focus on the business problem an organization is trying to solve. Good business questions include: Why are customers defecting? Why do customers choose Brand A over other brands? Who likes our product/service the most? Keep it simple and business focused.

Once the business question is identified, apply the backward research process (BRP): start with what you want to see when you are done and work backward to the questionnaire or database. Consider creating a ghost deck, a series of slides you would like to present once the work is complete, and review it with stakeholders before data collection and analysis commence. This ensures alignment on what you are trying to uncover.

When teams use BRP, they are better informed about what questions to ask and what data sources they might need. Oftentimes, analysts fall into a research malady known as single-study syndrome, where they feel compelled to answer the question using only the data gathered in their own survey. It is better to make efficient use of multiple data sources to help tell the story.

Link 4: Commitment to action

Without commitment to action, the best insights, whether tactical, operational or strategic, end up as colorful but space-consuming decks on a bookshelf or hard drive. Ensure commitment up front by asking stakeholders what they intend to do as a consequence of gathering customer feedback.

In the case of tactical insights, commitment to action is critical. Implementing a system that sets up an expectation for customers, only to have nothing done, will make the situation worse. Don’t set expectations with customers that your organization is not prepared to honor.

Involve stakeholders by forming a cross-functional steering committee to help leverage customer insights but keep things at a strategic level.

In a perfect world, everyone in the organization would be motivated to help customers because doing so is known to produce positive outcomes downstream. Telling this story and providing evidence can help galvanize the organization to do the right thing; showing the relationship between good customer experience and positive business results can help convince skeptical retailers and operators.

Link 5: Taking action

To make the most of insights and drive action that will positively impact business, first confirm that stakeholders responsible for taking action know how to use the research data and tools. Technical training may be required.

Next, ensure action is taken. Did anyone follow up with the customer? Was the product training conducted for the salespeople who needed it? These are the types of questions that must be answered affirmatively to positively impact the customer experience. Some of this verification can be automated. In the case of follow-up, some organizations pursue a double closed-loop process, whereby the CX program validates both that the customer has been contacted and that the concern has been resolved. For more tangible interventions, mystery shopping or crowdsourcing can confirm whether the activity was completed.
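A double closed-loop check could be automated along these lines. The alert records and status fields below are hypothetical, assuming a system that logs both the follow-up contact and the resolution.

```python
# A sketch of a double closed-loop check, assuming hypothetical alert
# records that track both follow-up and resolution status.
alerts = [
    {"id": "A-101", "followed_up": True,  "resolved": True},
    {"id": "A-102", "followed_up": True,  "resolved": False},
    {"id": "A-103", "followed_up": False, "resolved": False},
]

# First loop: was the customer contacted? Second loop: was the concern
# actually resolved? Anything failing either check is escalated.
unverified = [a["id"] for a in alerts if not (a["followed_up"] and a["resolved"])]
print("Escalate for verification:", unverified)  # ['A-102', 'A-103']
```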

What if action was taken but no customer experience improvement was observed? This is where you need to disentangle what Mark Lipseyⁱ calls “theory failure” from “program failure.” Researchers must determine whether the idea was a bad one that did not have the desired effect (theory failure) or whether it was not executed effectively or faithfully to the original theory (program failure).

Perhaps product training was conducted but was not enough for salespeople to internalize and apply their new skills in real-world practice. This would be a program failure: the right intervention, suboptimally implemented.

Alternatively, an organization may discover that the real reason product knowledge was deficient in a particular region was a lack of demo products to show customers. Customers rated a salesperson’s product knowledge as low because he could not show them the product and had little firsthand experience with it; when he did show the product, he described its features and benefits poorly. In this case, the theory, that training alone would close the gap, was wrong.

In the instance of a theory failure, the answer is to fix the problem (get demo products into the hands of the salespeople) and try again. With program failure, it is a matter of under-optimized execution. In implementing CX programs, it is important to take a measure-adjust-measure approach to see what is working and what is not. Of course, applying scientific rigor through experimental and quasi-experimental designsⁱⁱ can help you be more certain about attribution and should be done when feasible.
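By way of illustration, the simplest form of a measure-adjust-measure check is a before/after comparison tested for statistical significance. The sketch below uses scipy's independent-samples t-test on invented satisfaction scores; a true experimental or quasi-experimental design would also require a comparison group that did not receive the intervention.

```python
# A sketch of a measure-adjust-measure check on hypothetical satisfaction
# scores gathered before and after an intervention. A two-sample t-test
# is one simple way to judge whether the observed change exceeds noise.
from scipy import stats

before = [7.1, 6.8, 7.4, 6.9, 7.0, 7.2, 6.7, 7.3]
after  = [7.6, 7.9, 7.4, 8.0, 7.7, 7.8, 7.5, 8.1]

t_stat, p_value = stats.ttest_ind(after, before)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value suggests the shift is unlikely to be chance alone,
# though only an experimental or quasi-experimental design supports
# a causal attribution to the intervention itself.
```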

Brands serious about improving customer experience must be inherently curious and ask the right questions of the right people. Throughout the process, researchers must be clear on their objectives, know how stakeholders will use insights gleaned through measurement and CX programs and ensure a commitment to action once opportunities are identified. They should ask, listen, notice, analyze and challenge, and in doing so, uncover the insights that enable them to transform the way they do business and delight customers regardless of where or how they touch the brand.

i Lipsey, M.W. (2002). Design Sensitivity. Sage Publications.
ii Cook, T.D., and Campbell, D.T. (1979). Quasi-Experimentation: Design and Analysis Issues for Field Settings. Houghton Mifflin.