Editor's note: Kevin Gray is president of Cannon Gray LLC, a marketing science and analytics consultancy. This article appeared in the February 14, 2014, edition of Quirk's e-newsletter. 

 

Key driver analysis has many applications and comes in a variety of shapes and sizes. It is a component of several proprietary methodologies developed by marketing research agencies but, more typically, the term refers to a customized solution tailored to a specific set of client needs. Broadly speaking, we use key driver analysis to uncover what is most important to consumers in a product or service category, and it is a vital part of new product development, customer satisfaction, loyalty and retention, and new-customer acquisition strategies. In short, it helps clients better understand where to focus their efforts.

  

There are many ways to carry out key driver analysis - some quite complex - but the results are often presented very simply (e.g., quadrant mapping). This article outlines the basics of key driver analysis and offers some tips for best practice.

  

Stated vs. derived importance

  

Why not just ask consumers what's important to them? For example, we can show respondents a list of product attributes and ask how important each is to them on a five-point scale, ranging from very important to not at all important. While this is easy to administer, respondents unfortunately often tell us that just about everything is important! Consequently, their ratings may provide little discrimination among product features or service elements, even when those features are substantively very different. Thus, this type of feedback is not helpful to clients in establishing priorities.

  

Note: This generalization may not apply equally to all products or services. Were we to ask radiologists to rate the importance of features of imaging equipment, for instance, we'd most likely obtain better-differentiated responses because of respondents' higher category involvement than if we had asked about carbonated beverages instead.

  

Simple importance ratings are just one way to obtain importance directly. There are also the Kano model, magnitude estimation and constant sum, in which respondents allocate points among various features to indicate the relative importance of each. More recently, trade-off methods such as max-diff are enjoying greater use, though they are more complicated because they require experimental designs and statistical modeling. By and large, alternatives to standard rating scales will tend to provide better discrimination but at the cost of complexity and respondent fatigue.

  

With derived importance, on the other hand, respondents rate a brand or company on overall satisfaction, overall liking or overall purchase interest, in addition to rating it on a set of image or satisfaction attributes. The importance of each attribute is not asked directly; it is instead derived through statistical analysis that relates the attribute ratings to the target variable (e.g., overall liking). In this article, this general approach is what I mean by "key driver analysis."

  

Definitions can be tricky. Statistical modeling (e.g., hierarchical Bayes) is often used with max-diff and other trade-off methods, so they could also be classified as derived importance techniques, though most researchers probably would not describe them as key driver analyses. This may mainly be force of habit. Either way, trade-off methods are quite useful in their own right but, I feel, are best treated as a separate topic. I won't mention them further in this article other than to reiterate that they are another option.


Bivariate vs. multivariate methods

  

Setting definitional minutiae aside, in key driver analysis as I am describing it here, there are various ways to derive importance statistically. Pearson product-moment correlation coefficients and Jaccard similarity coefficients are two simple, bivariate means of accomplishing this. Each attribute is paired with the target (dependent) variable and, one by one, the strength of association in each pair is calculated. The stronger the association between an attribute rating and the target variable, the more important we assume the attribute is in driving liking. Though correlation does not necessarily imply causation, in key driver analysis we are at least implicitly making that assumption.
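
For illustration, here is a minimal sketch of the bivariate approach in Python. It assumes a respondent-level file with hypothetical attribute columns (taste, price, packaging) and an overall-liking column; the file and column names are placeholders, not a prescribed setup.

import pandas as pd

df = pd.read_csv("survey.csv")  # hypothetical respondent-level data

attributes = ["taste", "price", "packaging"]  # hypothetical attribute ratings
target = "overall_liking"                     # hypothetical target variable

# Pearson correlation of each attribute with the target, one pair at a time;
# larger coefficients are read as signs of more important drivers
drivers = df[attributes].corrwith(df[target]).sort_values(ascending=False)
print(drivers)

A Jaccard coefficient could be substituted when the data are yes/no mentions rather than rating scales.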

  

Bivariate methods are simple and easy to compute but have a couple of important drawbacks. One is that the attributes are considered one at a time and not in the context of the other attributes. That is, we do not account for the possible influence of other attributes on liking (in this example). Another shortcoming is that bivariate approaches sometimes provide little discrimination among the attributes; the coefficients may be of similar magnitude and of little use for making decisions.

  

Multivariate analysis (MVA), in which the effects of the independent variables are estimated jointly, is generally a better way to derive importance. However, MVA is prone to misuse. It should not be undertaken casually or as an afterthought. An example of where it can go wrong is when multiple regression is employed to derive importance and the attribute ratings are strongly correlated, as is frequently the case. Put simply, when independent variables (e.g., attribute ratings) are highly intercorrelated, the results will be suspect because it is difficult mathematically to pry the independent variables apart and isolate the relationship of each with the dependent variable (e.g., liking). This condition - highly intercorrelated independent variables - is known as multicollinearity, and how to deal with it remains an active area of research among academic statisticians and computer scientists.
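
As a rough illustration of the problem, the sketch below fits an ordinary multiple regression and then computes variance inflation factors, a common multicollinearity diagnostic. The column names are the same hypothetical ones used above.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("survey.csv")
X = sm.add_constant(df[["taste", "price", "packaging"]])  # hypothetical attributes
y = df["overall_liking"]

# OLS coefficients are sometimes read as importances, but they become
# unstable when the attribute ratings are highly intercorrelated
print(sm.OLS(y, X).fit().summary())

# Variance inflation factors; values above roughly 5-10 are a common warning sign
for i, col in enumerate(X.columns):
    if col != "const":
        print(col, variance_inflation_factor(X.values, i))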

  

Ridge regression, Shapley value regression, stepwise regression and the Lasso are a few of the many methods used when multicollinearity is a concern. Each has its pros and cons; forward stepwise regression in particular has been criticized for producing unstable results and has fallen out of favor among many statisticians. Principal components regression, structural equation modeling (SEM) and partial least squares regression are three other approaches. Although distinct from one another, each employs composite variables called components, factors or latent variables in place of the original independent variables. SEM refers to a large family of related methods that, collectively, provide great flexibility in modeling.
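
To make the idea concrete, here is a brief scikit-learn sketch of two of these options, ridge and the Lasso, using the same hypothetical columns. Standardizing the attributes first puts the coefficients on a comparable footing.

import pandas as pd
from sklearn.linear_model import RidgeCV, LassoCV
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("survey.csv")
X = StandardScaler().fit_transform(df[["taste", "price", "packaging"]])
y = df["overall_liking"]

ridge = RidgeCV(alphas=[0.1, 1.0, 10.0]).fit(X, y)  # shrinks coefficients toward zero
lasso = LassoCV(cv=5).fit(X, y)                      # can drive some coefficients exactly to zero

print("Ridge coefficients:", ridge.coef_)
print("Lasso coefficients:", lasso.coef_)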

  

The associations between the attributes and the target variable may not be straight-line relationships. In these circumstances, other methods (e.g., those employing regression splines) may be appropriate. Neural networks and several other methods popular in data mining and predictive analytics are further options in these cases and for key driver analysis in general. There is a very large number of such techniques we can draw upon that are not covered in standard statistics courses (see The Elements of Statistical Learning by Hastie, Tibshirani and Friedman).
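
When a curved relationship is suspected, one simple option is a regression spline on the attribute in question, as in this statsmodels formula sketch (again with hypothetical column names).

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")

# bs() is patsy's B-spline basis; here price is allowed a curved effect on liking
fit = smf.ols("overall_liking ~ bs(price, df=4) + taste + packaging", data=df).fit()
print(fit.summary())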

  

Many methods assume the target variable is numeric but key driver analysis is also conducted when the dependent variable is categorical - top-two-box vs. bottom-three-box purchase interest or user vs. non-user, for instance. Discriminant analysis and logistic regression, as well as many data mining and predictive analytics methods, are among a host of tools appropriate in these settings.
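
For a categorical target, a logistic regression along these lines is one straightforward choice. The top-two-box coding below assumes a five-point purchase-interest scale and, as before, hypothetical column names.

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey.csv")
df["t2b"] = (df["purchase_interest"] >= 4).astype(int)  # top two box vs. the rest

X = sm.add_constant(df[["taste", "price", "packaging"]])
logit = sm.Logit(df["t2b"], X).fit()
print(logit.summary())  # coefficients (or odds ratios) indicate driver strength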

  

The number of multivariate methods that can be applied in key driver analysis may seem daunting! No single approach is a panacea. However, the specifics of the circumstances and the amount and types of data can narrow down the range of choices considerably. Another useful guideline relates to the business objectives: Does a method only predict or can it also help you understand, in an intuitive and actionable way, why certain drivers are more or less important to consumers? Methods such as SEM are often useful in this regard when used wisely. Something else to consider is the level of research expertise in your (or your client's) organization. The "best" tool may not be best because even a layman's description of it may go over the heads of decision makers, and management may also be suspicious of black-box techniques.

  

Tips for best practice

  

On occasion, researchers will ask importance ratings directly and also derive importance through correlation analysis or some other means. The results are then compared and variables that emerge as most important (or least important) in both approaches are deemed truly important (or unimportant). While I will not assert that this is never good practice, the underlying rationale seems to be along the lines of "We don't trust either, so let's use both." This does not make sense to me and I personally have not found this costly, dual approach helpful. It also increases respondent burden, since attributes must be shown to respondents twice.

 

Key driver analysis can fail because the attribute statements or items rated do not have the same meaning to consumers that they do to marketers. Long questionnaires that burn out respondents are another recipe for failure, and placing key questions towards the end of the interview is an especially bad practice. Surveys are just one source of data; customer records and other databases that haven't been properly cleaned - or whose definitions are misunderstood by the analyst - can invalidate the analysis and cause more work.

 

Different statistical methods, or even the same methods run with different options, may provide very different perspectives on priorities, and it is important that this be recognized from the outset. Rarely is it possible to point to a single number in our computer output and conclude that one result is the answer. Done sensibly, generating an ensemble of results from disparate techniques and using the ensemble averages to approximate importance can be helpful, as the sketch below illustrates. Whatever the approach, experience and domain expertise are crucial, and if the results will not be useful to the client, they should not be considered useful by analysts.
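
One simple way to build such an ensemble is to rescale each method's importance scores so they sum to 100 and then average across methods. The numbers below are purely illustrative, not real results.

import pandas as pd

# Illustrative importance scores from three methods (made-up values)
scores = pd.DataFrame({
    "correlation":   [0.45, 0.30, 0.15],
    "ridge":         [0.52, 0.21, 0.10],
    "random_forest": [0.40, 0.35, 0.12],
}, index=["taste", "price", "packaging"])

normalized = scores.div(scores.sum(axis=0), axis=1) * 100   # each column sums to 100
print(normalized.mean(axis=1).sort_values(ascending=False)) # ensemble average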

 

The key driver methods I've covered all pertain to cross-sectional data. They are not well-suited to time series analysis (see my article "Time series analysis: what it is and what it does" in Quirk's September 23, 2013 e-newsletter).

 

I have also assumed one size fits all (i.e., that the key drivers are the same for everyone). This may not be the case. While we can repeat the analysis separately among predefined consumer groups, this scheme has downsides (see my article "Think you know segmentation? Think again! A close look at 4 core analyses" in Quirk's December 9, 2013, e-newsletter).

 

A complex topic

 

This very brief overview of key driver analysis is simply my perspective on a complex topic, but one that is very important to marketers. The discussion has been a bit lopsided in that the examples have emphasized traditional, survey-based marketing research, but the core principles and techniques are applicable to practically any kind of data.

 

In closing, perhaps the most helpful advice I can offer is to think hard from the beginning about what you are trying to achieve and about your end-users and their real needs.