Editor's note: Keith Chrzan is SVP analytics at Sawtooth Software. He can be reached at email@example.com.
Customer experience (CX) research seeks to answer two important questions: How are we performing? How can we best improve performance? Answering the first question involves having respondents provide some overall evaluation of their experience, such as customer satisfaction (Westbrook 1980) or advocacy (Reichheld 2003). While both measures (and others) have their proponents, no one doubts the need for some sort of report card for the overall rating of the product or service.
As a way of answering the second question, importance ratings may come to mind. Widely used in the marketing research industry, importance ratings are easy to ask and answer and they require very little questionnaire real estate – in one study, for example, respondents rated the importance of 10 items in only 38 seconds, just under four seconds per attribute (Chrzan and Golovashkina 2006). Unfortunately, importance ratings perform poorly in terms of their validity. Even more unfortunately, many marketing researchers remain unaware of just how bad importance ratings are. I find this surprising, because a trio of papers published 40 years ago in the industry’s premier academic journal, the Journal of Marketing Research, found that importance ratings not only lack predictive validity but in fact have negative predictive validity when used to model preferences (Bass and Wilkie 1973; Beckwith and Lehmann 1973; Wilkie and Pessemier 1973). To bring this to marketing researchers’ attention, Chrzan and Golovashkina (2006) replicated the earlier finding, again reporting that using importance ratings to weight attributes reduces the predictive validity of the resulting preference models.
To be fair, the CX industry largely moved away from asking stated importances a long time ago. Finding that they could be even more...