Integrating explicit and implicit approaches

Editor’s note: Adam DiPaula is president of, and Barb Justason is vice president of, CGT Research International, Vancouver, B.C.

Marketing researchers are under increasing pressure to provide practical, actionable direction in their reporting. Our clients, whether internal or external, want us to tie our work directly to revenues, staff efficiency and competitive positioning. In any other consulting profession, we might not hesitate to provide our clients with such direction based on our own experience and anecdotal evidence. And, to be fair, many of us could provide our clients with exceptional advice to guide their decision-making. But we’re marketing researchers. We thrive on evidence.

Clients of our customer satisfaction products, perhaps more than any other group, are challenging us to provide direction that will deliver measurable results. On the assumption that a delighted customer population will reward the organization with repeat purchases and recommendations to others, these organizations want to know where to direct their resources to better meet the needs and desires of their customers.

As straightforward as it may seem, identifying what customers want has never been easy. Today, there is what might be best described as a frenzied obsession with identifying customer needs and desires. Organizations are throwing themselves at the mercy of their customers in their attempt to identify the elusive keys to customer retention. It is now very rare for a customer to purchase a product or service without being confronted by some request to tell the organization what they want. “We’re listening,” “We need to hear from you,” and “How are we doing?” are all common slogans aimed at unlocking the secrets of the customer.

The marketing researcher has complemented these organizational mantras with techniques designed both to systematically evaluate what customers want and to diagnose what should be done to serve customers better. These techniques involve either some type of explicit approach to understanding what customers want (e.g., asking customers to indicate what is important) or some type of implicit measure (e.g., deriving what is important to customers from other measures). Below, we briefly review some of these techniques and their evolution, highlighting how each evaluates what is important to customers and how each diagnoses what should be done to improve customer service. We then propose a way of integrating these techniques to provide a more comprehensive approach to understanding what customers want.

Rating satisfaction with researcher-generated attributes
One approach to determining what is important to customers and how to address it involves asking customers to rate their satisfaction with various attributes of a particular service or product. The attributes rated are developed by the researcher, on the assumption that the researcher can correctly identify the attributes that matter to customers (without actually talking to customers directly). Diagnosis then involves simply identifying the attributes on which satisfaction ratings are poor.

  • Identifying what is important: Decided on by the researcher.
  • Diagnosing how to improve: Address attributes that are rated poorly.

Rating satisfaction with customer-generated attributes

As researchers realized that they did not necessarily know what aspects of a product or service were most important to customers, they began to solicit feedback directly from their customers - typically in a qualitative format - to identify important attributes. These attributes then formed the basis of customer evaluation. Again, the attributes that were rated poorly were the attributes that researchers recommended focusing on.

  • Identifying what is important: Obtained through qualitative customer feedback.
  • Diagnosing how to improve: Address attributes that are rated poorly.

Stated importance/satisfaction matrices

More recently, the focus has shifted toward developing more targeted approaches that prioritize service improvement initiatives, commonly by assessing the relative importance of attributes. Researchers ask customers to rate the importance of attributes and to rate their level of satisfaction with attributes. A four-quadrant matrix is created by crossing the two dimensions of importance and satisfaction (see Figure 1).

Figure 1

Attributes are categorized based on customer ratings on both importance and satisfaction. The researcher can identify attributes that are rated particularly important by customers and that receive relatively low performance ratings from customers - that is, attributes that customers state they value but that the organization is not delivering on (i.e., threats). These attributes can be targeted first, before those underperforming attributes that are rated as less important by customers (vulnerabilities). A key assumption of this approach is that customers can explicitly and reliably identify all of the attributes that will influence their impressions and behavior.

  • Identifying what is important: Customers directly rate the importance of attributes.
  • Diagnosing how to improve: Attributes are prioritized and targeted based on a combination of stated importance and satisfaction ratings.
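To make the mechanics concrete, here is a minimal sketch of how such a matrix might be built, assuming 10-point rating scales and mean splits on each dimension. The attribute names and ratings are hypothetical, and the labels for the two higher-satisfaction quadrants ("strength," "low priority") are placeholders of our own:

```python
# Hypothetical sketch of a Figure 1-style matrix: classifying attributes
# by mean splits on stated importance and satisfaction (10-point scales).
# All attribute names and ratings are invented for illustration.

ratings = {
    # attribute: (mean stated importance, mean satisfaction)
    "service frequency":   (9.1, 5.2),
    "staff friendliness":  (8.4, 8.1),
    "payment options":     (5.5, 4.9),
    "vehicle cleanliness": (6.0, 7.8),
}

imp_cut = sum(imp for imp, _ in ratings.values()) / len(ratings)
sat_cut = sum(sat for _, sat in ratings.values()) / len(ratings)

def quadrant(importance, satisfaction):
    """Assign an attribute to one of the four quadrants."""
    if importance >= imp_cut:
        return "threat" if satisfaction < sat_cut else "strength"
    return "vulnerability" if satisfaction < sat_cut else "low priority"

for attr, (imp, sat) in ratings.items():
    print(f"{attr}: {quadrant(imp, sat)}")
```

In practice the cutoffs could just as easily be medians or scale midpoints; the choice shifts which attributes land in which quadrant.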

Derived importance/satisfaction matrices
One shortcoming of the approach above is the often minimal variation in importance ratings across attributes - sometimes called the halo effect. Differentiating attributes becomes difficult because customers tend to rate nearly all of them as highly important. Further, stated importance measures are subject to socially desirable responding: customers may say an attribute is important to them because it enjoys widespread endorsement (e.g., using products made of recycled materials), even though it may not actually factor into their choices.

So researchers have turned to measures of derived importance. Derived importance measures are calculated by correlating satisfaction ratings on individual attributes with a measure of overall satisfaction. The resulting correlation coefficient is an implicit measure of the influence of the attribute on perceptions because customers are not asked directly what is important to them.
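As a minimal illustration of the calculation, assuming respondent-level data on 10-point scales (all values here are invented), the derived importance of each attribute might be computed as follows:

```python
import pandas as pd

# Hypothetical respondent-level data: satisfaction with each attribute
# plus overall satisfaction, all on 10-point scales (values invented).
df = pd.DataFrame({
    "service_frequency": [8, 6, 9, 5, 7, 8],
    "crowding":          [4, 3, 7, 2, 5, 6],
    "cleanliness":       [9, 8, 9, 7, 8, 9],
    "overall":           [8, 5, 9, 3, 6, 8],
})

# Derived importance: the Pearson correlation between each attribute's
# satisfaction ratings and overall satisfaction, across respondents.
derived_importance = df.drop(columns="overall").corrwith(df["overall"])
print(derived_importance.sort_values(ascending=False))
```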

With increasing frequency, derived measures of importance are replacing stated importance measures, and are being used to create the same type of matrix shown in Figure 1 (see Figure 2). The assumption is that, by avoiding the halo effect, derived importance measures produce diagnostic information that is as good as or better than that produced by stated measures. And the budget advantage is obvious: the survey instrument is shorter by an entire battery of questions.

Figure 2

  • Identifying what is important: Importance is derived from the correlation between attribute satisfaction and overall satisfaction.
  • Diagnosing how to improve: Attributes are prioritized and targeted based on a combination of derived importance and satisfaction ratings.

An integrated approach

But perhaps we’ve been too quick in jumping on the derived importance bandwagon. In many instances, it is a mistake to assume that derived measures of importance will produce the same or better diagnostic information. In fact, we argue that a deeper understanding of what drives customers can be gained by examining the interaction between stated and derived measures.

We first began to understand the value of examining attributes as a function of both stated and derived importance in our customer research in the public transportation sector. We had customers rate how important particular attributes were in their decision to use public transportation (stated importance), rate how they perceived the system to be performing on each attribute, and rate their perceived performance of the system overall. Correlating the latter two ratings (performance on each attribute and overall performance) yields a measure of derived importance for each attribute.

We found evidence that, rather than producing redundant measures of importance, these measures could be viewed as independent dimensions. Some attributes were high in stated importance but low in derived importance. Some were low in stated importance but high in derived importance. Some were high on both measures; some were low on both measures.

Figure 3

In the matrix in Figure 3, we crossed the dimensions of stated and derived importance to create four quadrants. We have labeled each quadrant and show examples of the types of attributes that fell into each in our analysis; a simple classification sketch follows the list.

  • Criticals. Criticals are attributes that customers view as essential to their continued use of a service or product and that have a strong impact on perceptions of service or product quality. In our analysis, service frequency and service reliability emerged as criticals.
  • Cost-of-entry. These are attributes that customers view as essential for a service to have in order for them to use it - typically expected as a given, and looked upon as part of the cost of entering the market. Performance on these attributes, however, is not strongly tied to overall perceptions of the service or product. Feeling safe using transit services emerged as a cost-of-entry attribute in our analysis, likely because customers generally expect this attribute, so it does not figure substantially in their overall assessment of the service.
  • Implicit. Implicit attributes are those attributes that customers, when explicitly asked, indicate are relatively less important to them but nonetheless emerge as strong drivers of overall performance perceptions. Crowding emerged as such an attribute in our analysis. We believe that implicit attributes emerge because customers are not always aware at a conscious level what drives their perceptions or behavior. Customers make assumptions about what attributes are important to them, but this may not always correspond to what attributes actually affect their perceptions most strongly.
  • Peripherals. Peripherals are attributes that are not rated by customers as highly important in their decision to use a product or service, and they have a relatively weak influence on overall perceptions of service or product quality. Having clean, graffiti-free transit vehicles emerged as a peripheral attribute in our analysis.
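Here is that classification sketch. The stated-importance ratings, derived-importance correlations and cutoffs are all invented for illustration; in practice the cutoffs might come from medians of the observed data:

```python
# Hypothetical sketch of the Figure 3 classification: crossing stated
# and derived importance with simple assumed thresholds.

attributes = {
    # attribute: (stated importance, derived importance)
    "service reliability":    (9.2, 0.65),
    "feeling safe":           (9.0, 0.15),
    "crowding":               (5.8, 0.55),
    "graffiti-free vehicles": (5.1, 0.10),
}

STATED_CUT = 7.0    # assumed threshold on the stated dimension
DERIVED_CUT = 0.40  # assumed threshold on the derived dimension

def label(stated, derived):
    """Map an attribute to one of the four quadrants of Figure 3."""
    if stated >= STATED_CUT and derived >= DERIVED_CUT:
        return "critical"
    if stated >= STATED_CUT:
        return "cost-of-entry"
    if derived >= DERIVED_CUT:
        return "implicit"
    return "peripheral"

for attr, (s, d) in attributes.items():
    print(f"{attr}: {label(s, d)}")
```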

We see our approach as painting a more complete picture of what is important to customers. Relying solely on stated measures to identify what is important to customers would mask the implicit attributes that drive customer perceptions and behavior (e.g., crowding). Relying solely on derived measures would minimize the importance of cost-of-entry attributes that clearly need to be present in order for a product or service to be used (e.g., safety).

Figure 4

While we developed our approach by analyzing data in the public transportation sector, it can be adapted to other sectors and industries. For example, in the insurance industry, we might find that attributes like speed or timeliness of the claims process emerge as criticals; attributes referencing personal characteristics of staff (e.g., friendliness, warmth) emerge as implicit; attributes related to payment options emerge as peripherals; and attributes related to basic aspects of the service transaction (e.g., correct calculation of relevant discounts) emerge as cost-of-entry attributes.

The next step: customer diagnostics in 3-D

We can add the diagnostic component to our approach by adding a third dimension to the matrix that incorporates the average performance rating for each attribute. We illustrate this in Figure 4 using hypothetical ratings for a set of attributes. In interpreting the matrix, note that the larger the symbol for an attribute, the lower its average performance rating. Hence, the attributes represented by the larger symbols in the criticals quadrant must be addressed first to improve overall satisfaction. These are issues of high stated importance and high derived importance on which the organization is performing poorly.
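As a sketch, a display of this kind might be produced as follows; the ratings and the size scaling are invented for illustration, not drawn from our transit data:

```python
import matplotlib.pyplot as plt

# Hypothetical sketch of a Figure 4-style display: stated importance vs.
# derived importance, with marker size scaled so that LOWER average
# performance produces a LARGER symbol. All values are invented.

attrs = {
    # attribute: (stated importance, derived importance, mean performance)
    "service reliability":    (9.2, 0.65, 5.1),
    "feeling safe":           (9.0, 0.15, 8.2),
    "crowding":               (5.8, 0.55, 4.4),
    "graffiti-free vehicles": (5.1, 0.10, 7.6),
}

MAX_RATING = 10  # assumed 10-point performance scale
for name, (stated, derived, perf) in attrs.items():
    size = 60 * (MAX_RATING - perf + 1)  # worse performance -> bigger marker
    plt.scatter(stated, derived, s=size, alpha=0.5)
    plt.annotate(name, (stated, derived))

plt.xlabel("Stated importance")
plt.ylabel("Derived importance (correlation with overall)")
plt.title("Attribute priorities (larger symbol = poorer performance)")
plt.show()
```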

This approach allows the organization to prioritize efforts to improve service or product attributes with the knowledge of both what customers tell us is important and what drives their overall perceptions of the effectiveness of a particular product or service.

Toward a comprehensive understanding of what customers value

One of the things that we hope our approach highlights is how explicit and implicit methods of measuring what customers value complement each other. Explicit inquiry - asking customers directly what is important - is necessary to fully understand what customers see as the essential factors guiding their own behavior and decision-making.

However, as noted earlier, customer feedback elicited through explicit approaches is affected by factors that may prevent it from identifying all of the key determinants of what customers value. These factors include, among others: assumptions regarding what should be important (e.g., socially desirable attributes); things that are particularly accessible in the minds of customers at the time of the inquiry but may not be of enduring concern; and the need for “sense making” on the part of the customer (the need to explain wants and desires in a coherent, internally consistent manner). The latter may inhibit customers from explicitly endorsing contradictory attributes - for example, placing a high value both on fast service and on customer service agents taking the time to explain things fully in a warm and friendly manner. If we assume that much of consumer behavior (and human behavior, for that matter) can be fully understood only by examining the seemingly contradictory motivations that give rise to it, we need a way to bypass the explicit plane of inquiry and move to the implicit.

Implicit measures of customer value are necessary because we cannot (nor should we expect to) rely solely on customers to tell us what is important to them. Can you accurately articulate everything that drives your own choices in your daily life - your choice of mate, your brand of beer, the car you drive? Probably not. For example, a strong determinant of whether someone holds a positive image of a brand is simply how often they have been exposed to it. Yet if you were asked why you have a strong view of a brand, you would likely not count “I’ve seen it a lot” among your reasons. Nor, if asked why you chose your mate, would you rank repeated exposure among the primary reasons driving your choice (although perhaps, in that case, it depends on the nature of the exposure).

Implicit measures of customer value, such as derived importance correlations, complement explicit measures by illuminating the values that customers cannot necessarily articulate directly but that nonetheless form a large part of the reason they use a company’s products and services.