Editor’s note: Bryan Orme is vice president of Sawtooth Software, Inc., Sequim, Wash.

The field of marketing research has rarely been the genesis for new statistical models. We’ve mainly borrowed from other fields. Conjoint analysis and the more recent discrete choice analysis (choice-based conjoint) are no exception: they were developed from work in the ’60s by mathematical psychologist Luce and statistician Tukey, and in the ’70s by McFadden (winner of the 2000 Nobel Prize in economics).

Marketers sometimes have thought (or been taught) that the word “conjoint” refers to respondents evaluating features of products or services CONsidered JOINTly. In reality, the adjective conjoint derives from the verb conjoin, meaning “to join together.” The defining characteristic of conjoint analysis is that respondents evaluate product profiles composed of multiple conjoined elements (attributes or features). Based on how respondents evaluate the combined elements (the product concepts), we deduce the preference scores that they might have assigned to individual components of the product that would have resulted in those overall evaluations. Essentially, it is a “back-door” (decompositional) approach to estimating people’s preferences for features, rather than an explicit (compositional) approach of simply asking respondents to rate the various components. The fundamental premise is that people cannot reliably express how they weight separate features of a product, but we can tease this information out using the more realistic approach of asking them to evaluate complete product concepts.

Let’s not deceive ourselves. Human decision-making and the formation of preferences is complex, capricious and ephemeral. Traditional conjoint analysis makes some heroic assumptions, including the proposition that the value of a product is equal to the sum of the value of its parts (i.e., simple additivity), and that complex decision-making can be explained using a limited number of dimensions. Despite the leaps of faith, conjoint analysis tends to work well in practice, and gives managers, engineers and marketers great insight to reduce uncertainty when facing important decisions. Conjoint analysis isn’t perfect, but we don’t need it to be. With all its assumptions and imperfections, it still trumps other methods.
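Stated as an equation (a sketch of the textbook additive model, with illustrative notation), the additivity assumption says that the total utility of a product profile $x$ described on attributes $1, \dots, A$ is simply

$$U(x) \;=\; \sum_{a=1}^{A} \beta_{a,\,x_a},$$

where $x_a$ is the level of attribute $a$ appearing in the profile and $\beta_{a,j}$ is the part-worth of level $j$ of attribute $a$. Everything that follows - card sorts, trade-off matrices, choice tasks - is at bottom a way of estimating those $\beta$ values from respondents’ overall evaluations.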

Early conjoint analysis (1960s and 1970s)

Just prior to 1970, marketing professor Paul Green recognized that Luce and Tukey’s 1964 article on conjoint measurement (published in a non-marketing journal) might be applied to marketing problems to understand how buyers made complex purchase decisions, to estimate preferences and importances for product features, and to predict buyer behavior. Green couldn’t have envisioned the profound impact his work on full-profile “card-sort” conjoint analysis would eventually achieve when he and co-author Rao published their historic 1971 article, “Conjoint Measurement for Quantifying Judgmental Data” in the Journal of Marketing Research (JMR).

With early full-profile conjoint analysis, researchers carefully constructed (based on published catalogs of orthogonal design plans) a deck of conjoint “cards.” Each card described a product profile, such as the one shown in Exhibit 1 for automobiles.

Respondents evaluated each of perhaps 18 separate cards, and sorted them in order from best to worst. Based on the observed orderings, researchers could statistically deduce for each individual which attributes were most important, and which levels were most preferred. The card-sort approach seemed to work quite well, as long as the number of attributes studied didn’t exceed about six. And, researchers soon found that slightly better data could be obtained by asking respondents to rate each card (say, on a 10-point scale of desirability) and using ordinary least squares (regression) analysis to derive the respondent preferences. In the mid-1970s, Green and Wind published an article in the Harvard Business Review on measuring consumer judgments for carpet cleaners, and business leaders soon took notice of this new method.
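To make the mechanics concrete, here is a minimal sketch (Python, with made-up cards and ratings; the attributes, levels and numbers are purely illustrative) of the ratings-plus-regression approach just described: each card’s rating is regressed on dummy codes for its conjoined levels, and the fitted coefficients are the part-worths.

```python
# Minimal sketch (hypothetical data): deriving part-worths from full-profile
# card ratings with dummy-coded ordinary least squares.
import numpy as np

# Each card conjoins one level of each attribute; here, two illustrative
# attributes (brand, price), each with three levels coded as indices 0-2.
cards = [
    (0, 0), (0, 1), (0, 2),
    (1, 0), (1, 1), (1, 2),
    (2, 0), (2, 1), (2, 2),
]
# Hypothetical 10-point desirability ratings, one per card.
ratings = np.array([9, 7, 4, 8, 6, 3, 6, 4, 2], dtype=float)

# Dummy-code each attribute, dropping one reference level per attribute
# so the design matrix is not collinear with the intercept.
X = np.zeros((len(cards), 5))  # intercept + 2 brand dummies + 2 price dummies
X[:, 0] = 1.0
for row, (brand, price) in enumerate(cards):
    if brand > 0:
        X[row, brand] = 1.0        # columns 1-2: brand levels 1 and 2
    if price > 0:
        X[row, 2 + price] = 1.0    # columns 3-4: price levels 1 and 2

# OLS: the fitted coefficients are the part-worths, expressed relative
# to each attribute's reference level.
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print("intercept:", round(coefs[0], 2))
print("brand part-worths (vs. brand 0):", np.round(coefs[1:3], 2))
print("price part-worths (vs. price 0):", np.round(coefs[3:5], 2))
```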

Also just prior to 1970, a practitioner named Rich Johnson at Market Facts was working independently to solve a difficult client problem involving a durable goods product and trade-offs among 28 separate product features, each having about five different realizations (levels). The problem was much more complex than those being solved by Green and co-authors with full-profile card-sort conjoint analysis, and Johnson invented a clever method of pairwise trade-offs using “trade-off matrices,” which he published in JMR in 1974. Rather than asking respondents to evaluate all attributes at the same time (in “full profile”), Johnson broke the problem down into focused trade-offs involving just two attributes at a time. Respondents were asked to rank-order the cells within each table, in terms of preference, for the conjoined levels (Exhibit 2).

Respondents completed a number of these pairwise tables, covering all attributes in the study (but not all possible combinations of attributes). By observing the rank-ordered judgments across the trade-off matrices, Johnson was able to estimate a set of preference scores and attribute importances across the entire list of attributes, again for each individual.

Conjoint analysis in the 1980s

By the early 1980s, conjoint analysis was spreading (at least among researchers and academics possessing statistical knowledge and computer programming skills). Another influential case study had been published by Green and Wind regarding a successful application of conjoint analysis to help Marriott design its new Courtyard hotels. When commercial software became available in 1985, the floodgates opened. Based on Green’s work with full-profile conjoint analysis, Steve Herman, under the Bretton-Clark name, released a software system for the IBM PC standard.

Also in 1985, Johnson and his new company, Sawtooth Software, released a software system (also for the IBM PC standard) called ACA (adaptive conjoint analysis). Over many years of working with trade-off matrices, Johnson had discovered that respondents had difficulty dealing with the numerous tables and in providing realistic answers. He realized that he could program a computer to administer the survey and collect the data. The computer could adapt the survey to each individual in real time, asking only the most relevant trade-offs in an abbreviated, more user-friendly way that encouraged more realistic responses. Respondents seemed to enjoy taking computer surveys, and they often commented that taking an ACA survey was like “playing a game of chess with the computer.”

One of the most exciting aspects of these commercial conjoint analysis programs (traditional full-profile conjoint or ACA) was the inclusion of “what-if” market simulators. Once the preferences of typically hundreds of respondents for an array of product features and levels had been captured, researchers or business managers could test the market acceptance of competitive products in a simulated competitive environment. One simply scored the various product offerings for each individual by summing the preference scores associated with each product alternative. Respondents were projected to “choose” the alternative with the highest preference score. The results reflected the percent of respondents in the sample that preferred each product alternative, termed “share of preference.” Managers could make any number of slight modifications to their products and immediately test the likely market response by pressing a button. Under the proper conditions, these shares of preference were fairly predictive of actual market shares. The market simulator took esoteric preference scores (part worth utilities) and converted them into something much more meaningful and actionable for managers (product shares).
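The first-choice logic of these simulators is simple enough to sketch in a few lines (Python, with hypothetical part-worths; the names and numbers are illustrative and not drawn from any actual study or product):

```python
# Minimal sketch (hypothetical utilities): a first-choice market simulator.
# Each respondent "chooses" the product whose summed part-worths are highest;
# share of preference is the percent of respondents choosing each product.

# part_worths[r] holds one dict per attribute (brand, then price) for
# respondent r, mapping each level to its utility.
part_worths = [
    [{"A": 1.2, "B": 0.3}, {"low": 0.8, "high": -0.8}],
    [{"A": 0.1, "B": 0.9}, {"low": 0.5, "high": -0.5}],
]

# Products are defined by one level per attribute.
products = {
    "Brand A, high price": ("A", "high"),
    "Brand B, low price": ("B", "low"),
}

def total_utility(resp, levels):
    # Additive rule: sum the part-worth of each conjoined level.
    return sum(attr[level] for attr, level in zip(resp, levels))

counts = {name: 0 for name in products}
for resp in part_worths:
    best = max(products, key=lambda name: total_utility(resp, products[name]))
    counts[best] += 1

for name, n in counts.items():
    print(f"{name}: {100 * n / len(part_worths):.0f}% share of preference")
```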

Conjoint analysis quickly became the most broadly used and powerful survey-based technique for measuring and predicting consumer preference. But the mainstreaming of conjoint analysis wasn’t without its critics, who argued that making conjoint analysis available to the masses through user-friendly software was akin to “giving dynamite to babies.”

Those who experienced conjoint analysis in the late 1980s are familiar with the often acrimonious debates that ensued between two polarized camps: those advocating full-profile conjoint analysis and those in favor of ACA. In hindsight, the controversy had both positive and negative consequences. It certainly inspired research into the different merits of the approaches. But it also dampened some of the enthusiasm and probably slowed adoption of the technique, as researchers and business managers alike paused to assess the fallout.

Even prior to the release of the first two commercial conjoint analysis systems, Jordan Louviere and colleagues were adapting the idea of choice analysis among available alternatives and multinomial logit to, among other things, transportation and marketing problems. The groundwork for modeling choice among multiple alternatives had been laid by McFadden in the early 1970s. The concept of choice analysis was attractive: buyers didn’t rank or rate a series of products prior to purchase; they simply observed a set of available alternatives (again described on conjoined features) and made a choice. A representative discrete choice question involving automobiles is shown in Exhibit 3.

Discrete choice analysis seemed more realistic, natural for respondents, and offered powerful benefits, such as the ability to better model interaction terms (e.g., brand-specific demand curves), cross-effects (e.g., availability effects and cross-elasticities), and the flexibility to incorporate alternative-specific attributes and multiple constant alternatives. But the benefits came at considerable cost: discrete choice questions were a statistically inefficient way to collect preference information. Respondents needed to read quite a bit of information before making a choice, and a choice only indicated which alternative was preferred rather than strength of preference. As a result, there wasn’t enough information to separately model each respondent’s preferences. Rather, aggregate (summary) models of preference were developed across groups of respondents, and these were subject to various problems such as IIA (commonly known as the “red bus/blue bus” problem) and blindness to the separate preference functions of latent subgroups. Overcoming the problems of aggregation required building ever more complex models to account for availability and cross-effects (“mother logit” models), and most conjoint researchers either didn’t have the desire, stomach or ability to build them - not to mention that no easy-to-use commercial software existed for start-to-finish discrete choice analysis. Consequently, discrete choice analysis was used by a relatively small and elite group throughout the 1980s.
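For reference, the multinomial logit model at the heart of discrete choice analysis is compact enough to state here. If $V_j$ is the summed part-worth utility of alternative $j$, the probability of choosing alternative $i$ from a choice set is

$$P(i) \;=\; \frac{e^{V_i}}{\sum_{j} e^{V_j}}.$$

The IIA problem follows directly from this form: the ratio $P(i)/P(k) = e^{V_i - V_k}$ ignores every other alternative, so adding a blue bus identical to an existing red bus draws share proportionally from all alternatives (including the car) rather than simply splitting the bus riders - hence “red bus/blue bus.”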

Conjoint analysis in the 1990s

Whereas the 1980s were characterized by a polarization of conjoint analysts into ideological camps, researchers in the 1990s largely came to recognize that no one conjoint method was the best approach for every problem, and expanded their repertoires. Sawtooth Software influenced and facilitated this movement by publishing research (much of it presented by its users at the Sawtooth Software Conference) demonstrating under what conditions different conjoint methods performed best, and then by developing additional commercial software systems for full-profile conjoint analysis and discrete choice.

For most of the 1990s, industry usage studies conducted by leading academics showed ACA to be the most widely used conjoint technique and software system worldwide. By the end of the decade, however, ACA would yield that position to surging discrete choice analysis. Two main factors were responsible for discrete choice analysis overtaking ACA and other ratings-based conjoint methods by the turn of the century:

1) The release of commercial software for discrete choice (CBC or choice-based conjoint) by Sawtooth Software in 1993.

2) The application of hierarchical Bayes (HB) methods to estimate individual-level models from discrete choice (principally due to articles and tutorials led by Allenby of Ohio State University).

Discrete choice experiments are typically more difficult to design and analyze than traditional full-profile conjoint or ACA. Commercial software made it much easier to design and field studies, while HB made the analysis of choice data seem nearly as straightforward and familiar as for ratings-based conjoint. With individual-level models under HB, the IIA issues and other problems due to aggregation were controlled or entirely solved. This has helped immensely with CBC studies, especially for those designed to investigate the incremental value of line extensions or “me-too” imitation products. While HB transformed the way discrete choice studies were analyzed, it also provided incremental benefits in accuracy for traditional ratings-based conjoint methods that had always been analyzed at the individual level.
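In outline (a sketch of the generic HB choice model, not any one vendor’s implementation), each respondent $i$ has a part-worth vector $\beta_i$ drawn from a population distribution, and that respondent’s choices follow a logit rule:

$$\beta_i \sim N(\mu, \Sigma), \qquad P_i(k) \;=\; \frac{e^{x_k'\beta_i}}{\sum_{j} e^{x_j'\beta_i}},$$

where $x_k$ codes the attribute levels of alternative $k$. Because the population parameters $\mu$ and $\Sigma$ are estimated across the entire sample, each respondent’s sparse choice data “borrow” strength from everyone else’s - which is what makes stable individual-level estimates possible from a handful of choice tasks.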

Other important developments during the 1990s included:

  • latent class models for segmenting respondents into relatively homogeneous groups, based on preferences;
  • Web-based data collection for all main flavors of conjoint/choice analysis;
  • improvements in computer technology for rendering and presenting graphics;
  • dramatic increases in computing speed and memory, which made techniques such as HB feasible for common data sets;
  • greater understanding of efficient conjoint and choice designs: level balance, level overlap, orthogonality, and utility balance (the first and third of these are illustrated in the sketch after this list);
  • SAS routines developed by Kuhfeld, especially for design of discrete choice plans using computerized searches;
  • advances in the power and ease of use of market simulators (due to commercial software developers, or consultants building simulators within common spreadsheet applications).
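As a small illustration of two of the design criteria just listed (a sketch using a made-up design, not a production design routine), level balance and two-way orthogonality can both be checked by simple counting:

```python
# Minimal sketch (hypothetical design): checking level balance and two-way
# orthogonality for a small array of profiles, each coded as one level
# index per attribute.
from collections import Counter
from itertools import combinations

design = [  # three illustrative attributes; rows are profiles
    (0, 0, 0), (0, 1, 1),
    (1, 0, 1), (1, 1, 0),
    (2, 0, 0), (2, 1, 1),
]
n_attrs = len(design[0])

# Level balance: each level of each attribute should appear equally often.
for a in range(n_attrs):
    print(f"attribute {a} level counts:", dict(Counter(p[a] for p in design)))

# Two-way orthogonality: every pair of levels across two attributes should
# co-occur equally often (this toy design is balanced but not orthogonal).
for a, b in combinations(range(n_attrs), 2):
    joint = Counter((p[a], p[b]) for p in design)
    print(f"attributes {a} x {b} joint counts:", dict(joint))
```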

The 1990s represented a decade of strong growth for conjoint analysis and its application in a fascinating variety of areas. Conjoint analysis had traditionally been applied to fast-moving consumer goods, technology products and electronics, durables (especially automotive), and a variety of service-based products (such as cell phones, credit cards, banking services). Some other interesting areas of growth for conjoint analysis included design of Web sites, litigation and damages assessment, human resources and employee research, and Web-based sales agents for helping buyers search and make decisions about complex products and services.

Analysts had become so trusting of the technique that the author became aware of some who used conjoint analysis to help them personally decide among cars to buy or even members of the opposite sex to date!

Year 2000 and beyond

Much of the recent research and development in conjoint analysis has focused on doing more with less: stretching the research dollar using IT-based initiatives, reducing the number of questions required of any one respondent with more efficient design plans and HB (“data borrowing”) estimation, and reducing the complexity of conjoint questions using partial-profile designs.

Researchers have recently gone to great lengths to make conjoint analysis interviews more closely mimic reality: using animated 3D renditions of product concepts rather than static 2D graphics or pure text descriptions, and designing virtual shopping environments with realistic store aisles and shelves. In some cases the added expense of virtual reality has paid off in better data; in other cases it has not.

Since 2000, academics have been using HB-related methods to develop more complex models of consumer preference: relaxing the assumptions of additivity by incorporating non-compensatory effects, incorporating other descriptive and motivational variables, modeling the interlinking web of multiple influencers and decision-makers, and linking survey-based discrete choice data with sales data, to name just a few. Additional efforts toward real-time (adaptive) customization of discrete choice designs to reduce the length of surveys and increase the precision of estimates have been published or are underway.

Software developers are continuing to make it easier, faster, more flexible and less expensive to carry out conjoint analysis projects. These software systems often support multiple interviewing formats, including paper-based, PC-based, Web-based and handheld device interviewing. Developers keep a watchful eye on the academic world for new ideas and methods that gain traction and are shown to be reliable and useful in practice.

Commercially available market simulators are becoming more actionable as they incorporate price and cost information, leading to market simulations based on revenues and profitability rather than just “shares of preference.” To reduce the amount of manual effort involved in specifying successive market simulations to find optimal products, automated search routines are now available. These find optimal or near-optimal solutions when dealing with millions of possible product configurations and dozens of competitors - usually within seconds or minutes. This has expanded opportunities for academics in game theory who can study the evolution of markets as they achieve equilibrium, given a series of optimization moves by dueling competitors.
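In spirit (a sketch with hypothetical utilities; commercial routines use far smarter search strategies than brute force, since exhaustive enumeration does not scale to millions of configurations), such an optimizer simply scores candidate configurations with a simulator like the one sketched earlier and keeps the best:

```python
# Minimal sketch (hypothetical data): exhaustive search for the product
# configuration with the highest simulated first-choice share against a
# fixed competitor.
from itertools import product as configurations

# part_worths[r] holds one dict per attribute (brand, then price) for
# each of three hypothetical respondents.
part_worths = [
    [{"A": 1.0, "B": 0.2}, {"low": 0.9, "high": -0.9}],
    [{"A": 0.3, "B": 0.8}, {"low": 0.4, "high": -0.4}],
    [{"A": 0.6, "B": 0.5}, {"low": 0.7, "high": -0.7}],
]
competitor = ("B", "low")
levels = [["A", "B"], ["low", "high"]]  # candidate levels per attribute

def utility(resp, config):
    return sum(attr[lvl] for attr, lvl in zip(resp, config))

def share(config):
    # First-choice rule: a respondent picks our product only if it beats
    # the competitor outright.
    wins = sum(1 for r in part_worths
               if utility(r, config) > utility(r, competitor))
    return wins / len(part_worths)

best = max(configurations(*levels), key=share)
print("best configuration:", best, "- share of preference:", round(share(best), 2))
```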

Importantly, more people are becoming proficient in conjoint analysis as the trade is being taught to new analysts, as academics are including more units on conjoint analysis in business school curricula, as a growing number of seminars and conferences are promoting conjoint training and best practices, and as research is being published and shared more readily over the Internet.

Continues to evolve

Yes, conjoint analysis is 30-plus years old. But rather than stagnating in middle age, it continues to evolve - transformed by new technology and methodologies, infused by new intellectual talent, and championed by business leaders. It is very much in the robust growth stage of its life cycle. In retrospect, very few would disagree that conjoint analysis represents one of the great success stories in quantitative marketing research.