Less may be more

Editor’s note: Vivek Bhaskaran is CEO and co-founder of Survey Analytics, an Issaquah, Wash., research firm.

Businesses today realize that one of the keys to success in the competitive marketplace is effective customer management. Companies see customer relationships as a strategic advantage and have invested a lot of effort in making sure that customer relationship management (CRM) is high on the priority list. However, few companies have invested in a continuous measurement strategy that can signal potential dips in satisfaction in real time.

With the explosive growth of do-it-yourself research projects using online surveys, conducting customer satisfaction surveys has become more of an in-house operation. Many companies are asking themselves, How can we improve if we can’t effectively measure?

For a customer satisfaction program to be effective and accepted, it should be more than just one survey that is sent out to all your customers annually. It should be an ongoing strategy of continuous measurement and improvement based on the feedback received. The improvements themselves can then be validated directly in the form of satisfaction indices.

This article will discuss the different features and characteristics (and possible mitigation strategies) of conducting an effective customer satisfaction program for your business.

The core tenets: satisfaction, importance and loyalty

Most customer interaction studies have a few core issues that we’d like to measure:

a) Satisfaction - How satisfied are your customers with respect to the various services and attributes of your engagement with them?

b) Importance - What is really important to your customers and what is not?

c) Loyalty - What do your customers really think about you, and how do they perceive your services? For most businesses, customer retention directly affects profitability.

For each of the core tenets above we’ll summarize two things:

1) effective strategies for presentation and data collection;

2) options for data-analysis and interpretation.

Satisfaction

Effective strategies for presentation

For the most part, a battery of five-point scale options (very dissatisfied-very satisfied) imposes a fairly low degree of cognitive stress. The five-point scale usually has enough options to accommodate the spectrum of social perception. A battery of options (a matrix) is generally preferred for the following reasons:

1. It gives users a reference point. For example, asking a user to rate their satisfaction with the product purchase experience on one screen and their customer service experience on another does not give them a common frame of reference.

2. It occupies less visual real estate, which leads to a more effective presentation.

The basic principle is to put together a list of between three and seven components of your service that you’d like to measure. Also add a final overall satisfaction rating.

Options for data-analysis and interpretation

a) Mean score across all respondents for each option. Assign scores of 1 (very dissatisfied) through 5 (very satisfied). Obviously, the closer the mean for each option is to 5, the better you are doing.

b) Relative mean score. Here you compare the mean of each option against the others. This gives you a good idea of how satisfaction with each component stacks up relative to the rest.

c) It is important to collect the overall satisfaction score along with the component satisfaction scores. There are a couple of reasons for this:

  • The overall satisfaction score should be close to the average of the individual component satisfaction scores. If the overall satisfaction is way out of line with the component scores, it usually means that some form of bias is taking place or that some component is missing from the matrix.
  • Regression analysis can be performed on the data to derive importance scores for each of the components (see the sketch after this list).
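To make points a) and c) concrete, here is a minimal sketch in Python (assuming numpy is available; the component names and ratings are hypothetical) that computes the mean score for each component and then regresses the overall satisfaction rating on the component ratings to derive relative importance weights.

```python
import numpy as np

# Hypothetical 1-5 ratings; each row is one respondent.
# First three columns are component ratings, the last is overall satisfaction.
components = ["product", "support", "billing"]
ratings = np.array([
    [4, 5, 3, 4],
    [5, 4, 4, 5],
    [3, 2, 4, 3],
    [4, 3, 3, 3],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
])
X, overall = ratings[:, :-1], ratings[:, -1]

# a) Mean score per component (the closer to 5, the better).
for name, score in zip(components, X.mean(axis=0)):
    print(f"mean satisfaction - {name}: {score:.2f}")
print(f"mean overall satisfaction: {overall.mean():.2f}")

# c) Derived importance: regress overall satisfaction on the component ratings.
# Larger coefficients suggest components that drive overall satisfaction more.
A = np.column_stack([X, np.ones(len(X))])            # add an intercept term
coefs, *_ = np.linalg.lstsq(A, overall, rcond=None)
for name, beta in zip(components, coefs[:-1]):
    print(f"derived importance - {name}: {beta:.2f}")
```

With real data, read the coefficients as indicative rather than definitive; small samples or highly correlated components can make them unstable.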

Importance

Generally, when customers are comparison shopping, they are really comparing options that are important to them. Measuring the importance of the different components of your product or service is a little more challenging than measuring satisfaction, because importance is generally relative. Different people have widely different perceptions of importance and need. Accordingly, we simply cannot take the same approach we took with measuring satisfaction. A five-point scale (not very important-very important) is simply not going to give you data that can be called actionable. Moreover, having another five-point scale that looks and feels very much like the previous (satisfaction) scale becomes monotonous and uninteresting. Always strive to make the survey engaging.

Effective strategies for presentation

The easiest and most effective way of measuring importance is a simple multiple-choice question (select more than one option): display all the components and have your users choose the top three factors they consider important. This approach has the following advantages:

- Users only have to check three items out of a list of, say, seven.
- Users don’t have to worry about ranking the three items they select.
- Users don’t feel overwhelmed by another battery of questions.

It also has disadvantages:

- Detailed segmentation cannot be obtained.
- It is not possible to determine relative importance on a per-user level.

Options for data-analysis and interpretation

There are two parts to the data analysis that can guide us here: basic frequency analysis and TURF analysis.

a) Frequency analysis

We can do a simple frequency analysis of all the respondents. The top three important issues for all the respondents should be visible immediately. The relative frequencies can give you an idea of the importance ratings for each of the options.
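As a minimal sketch (with hypothetical top-three selections), the frequency count is a short pass over the multi-select responses:

```python
from collections import Counter

# Hypothetical "pick your top three" responses; each inner list is one respondent.
responses = [
    ["price", "support", "reliability"],
    ["price", "features", "reliability"],
    ["support", "reliability", "ease of use"],
    ["price", "reliability", "features"],
    ["ease of use", "support", "price"],
]

counts = Counter(pick for picks in responses for pick in picks)
for option, n in counts.most_common():
    print(f"{option}: chosen by {n} of {len(responses)} respondents "
          f"({100 * n / len(responses):.0f}%)")
```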

b) TURF analysis

TURF (total unduplicated reach and frequency) analysis is traditionally used to measure reach in multiple-choice questions (where users are allowed to choose more than one option), but TURF can also be used in other contexts, such as measuring importance. TURF analysis lets you look at the data from an option-reach perspective. It answers questions like: “If I address this component/option, what percentage of my customers will I connect with?”
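Full TURF analysis evaluates combinations of options, but the core reach idea can be sketched with a simple greedy approximation. The responses below are hypothetical top-three picks like those in the frequency example:

```python
# Greedy TURF-style reach: which components, addressed together, touch
# the most customers? (Hypothetical top-three picks, one set per respondent.)
responses = [
    {"price", "support", "reliability"},
    {"price", "features", "reliability"},
    {"support", "reliability", "ease of use"},
    {"price", "reliability", "features"},
    {"ease of use", "support", "price"},
]
options = set().union(*responses)

chosen, reached = [], set()
while options - set(chosen):
    # Pick the option that adds the most not-yet-reached respondents.
    best = max(options - set(chosen),
               key=lambda o: sum(1 for i, r in enumerate(responses)
                                 if o in r and i not in reached))
    chosen.append(best)
    reached |= {i for i, r in enumerate(responses) if best in r}
    print(f"address {chosen}: reach "
          f"{100 * len(reached) / len(responses):.0f}% of respondents")
```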

Loyalty

One of the most effective measures of loyalty is to measure the degree to which your customers will vouch for you. If your customers go out of their way to recommend your product or service to others, it’s an effective measure of their perception.

Effective strategies for presentation

Again, simplicity is the key. A single question can give you a measure of how loyal your customers are. Asking your customers how likely they are to recommend your product or service to their colleagues and friends gives you a fairly good indication of how they perceive your service or product.

Options for data-analysis and interpretation

For a positive growth environment, the mean should be between 1 and 1.5 on a five-point scale where 1 is “very likely to recommend.” Most of your customers should feel good about recommending your services or products to others.
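A minimal sketch of that check, assuming a five-point scale where 1 means “very likely to recommend” (the responses are hypothetical):

```python
from statistics import mean

# Hypothetical likelihood-to-recommend responses (1 = very likely, 5 = very unlikely).
recommend = [1, 1, 2, 1, 1, 2, 1, 1, 1, 2]

avg = mean(recommend)
very_likely = sum(1 for r in recommend if r == 1) / len(recommend)
print(f"mean recommend score: {avg:.2f}  (target: between 1 and 1.5)")
print(f"share answering 'very likely': {very_likely:.0%}")
```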

A great deal of research has shown that customer loyalty is intrinsically tied to the value people still place on word of mouth.

The satisfaction index

What is a customer satisfaction index? Indices are popular in large part because of their ability to represent the underlying data effectively and accurately with a single number. In absolute terms, an index does not have much value; it is the rise (or fall) of the index over time that actually makes a difference.

Generally, indices are developed based on specific models. These models are specific to industries and are really beyond the scope of the current discussion. However, it is fair to say that indices are mathematical representations of the different components of the data that you collect.
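As one illustrative formulation only (not an industry-specific model), a satisfaction index can be sketched as a weighted roll-up of the component means onto a 0-100 scale; the components and weights below are hypothetical:

```python
# One illustrative formulation: a weighted roll-up of component means onto a
# 0-100 scale. Weights and component means here are hypothetical; real index
# models are industry-specific.
weights = {"product": 0.40, "support": 0.35, "billing": 0.25}        # sum to 1
component_means = {"product": 4.1, "support": 3.6, "billing": 3.9}   # 1-5 scale

index = sum(
    weights[c] * (component_means[c] - 1) / 4 * 100   # map 1-5 onto 0-100
    for c in weights
)
print(f"satisfaction index: {index:.1f} / 100")
# Track this number over time: the movement, not the absolute value, is what matters.
```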

Non-response bias

With the online survey process, long and unwieldy online surveys are becoming very common. It is relatively easy and tempting to create long surveys so that granular data points are collected. While on one hand this gives you all the data you need to make and act on business decisions, it also introduces an important concept in online research: non-response bias.

What exactly is non-response bias? Let’s say you have 200 customers, and you send out a customer satisfaction survey to all of them. You get a response rate of 20 percent. The question is, do these 40 customers speak for all your customers? How confident are you that the responses from 20 percent of your customer base can be taken and applied to most of your customers? What if only the very satisfied or the very dissatisfied customers actually took the time to complete the survey? Non-response bias is the skew in the analysis and interpretation of your data that arises because a large percentage of the people you surveyed did not respond. While there are many effective ways of making sure your response rates are high enough, the primary factor behind abandoned surveys is survey length. Keep your surveys short and simple.
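A small hypothetical illustration of how this skews the numbers: if the most satisfied customers are far more likely to return the survey, the observed mean sits well above the true mean of the full customer base.

```python
# Hypothetical: 200 customers with "true" satisfaction scores (1-5), where the
# most satisfied are far more likely to return the survey.
population = {5: 60, 4: 50, 3: 40, 2: 30, 1: 20}               # 200 customers
response_rate = {5: 0.40, 4: 0.25, 3: 0.10, 2: 0.05, 1: 0.05}  # per score level

true_mean = sum(s * n for s, n in population.items()) / sum(population.values())
responders = {s: n * response_rate[s] for s, n in population.items()}
observed_mean = sum(s * n for s, n in responders.items()) / sum(responders.values())

print(f"response rate: {sum(responders.values()) / sum(population.values()):.1%}")
print(f"true mean satisfaction:     {true_mean:.2f}")
print(f"observed mean satisfaction: {observed_mean:.2f}  (biased upward)")
```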

Less is more

The question becomes, how do you balance your analytical and business requirements while keeping your surveys interesting and short enough to keep response rates high? Obviously, it’s a balancing game. However, there are certain steps that can help:

a) Design the survey to collect data that is actionable. Do you really need to survey a large segment of your market? Or would a shorter survey on a smaller scale get you the answers you need?

b) Always allow users to enter comments in open-ended text. Monitor these comments while in testing mode. This can give you some very quick insight into what users think about your survey in general.

c) Anecdotal vs. tracking surveys. Think about the methodology you are going to use to collect data. Are you going to do a one-time survey, or is the data going to be collected over a long period of time for continuous tracking? Continuous tracking surveys can be short and still accomplish the analysis objectives. Single (one-time) surveys usually need to collect more data. Can your business and data collection objectives be prioritized, with multiple surveys sent out over the course of the customer life cycle to collect data?

- Send pre-sales surveys to potential customers.

- Send post-sales surveys to newly acquired customers.

- Send regular satisfaction surveys to ongoing customers.

- Send exit/close-out surveys to customers who are walking away.

d) Keep cognitive stress to a minimum. What is cognitive stress? Have you ever filled out a survey/form where you were asked to distribute 100 points over a set of, say, five items? It’s not rocket science, but it frustrates a lot of people. This frustration will directly affect the response rate.

Executive visibility

The success of a customer satisfaction program will depend in large part upon how comprehensively and cohesively the data can be presented to business decision makers. One of the challenges that in-house (as well as external) research projects constantly face is that if the data is significantly different from what the business decision makers expect, it is often dismissed as anecdotal or a one-time phenomenon. To mitigate this issue, explore running a continuous program so that real-time feedback can be provided to executive management on demand. Solutions like customer satisfaction dashboards come in very handy for winning such buy-in and also build confidence in the research solution.

Manage the requirements

The key to success for any customer satisfaction research study is how well you balance conflicting data-analysis requirements against the need for simplicity. Customer satisfaction studies need not be all-encompassing. They can be short and still give you the data points needed to make informed business decisions. You can leverage technology to segment out populations that warrant further research (very unsatisfied users, etc.) and delve into the reasons behind their customer experience. Remember, you can never improve what you cannot measure effectively.