Editor’s note: John V. Colias is vice president and director of the Advanced Analytics Group at Arlington, Texas, research firm Decision Analyst Inc.

Over the past two decades, the marketing research industry has witnessed an explosion of choice models based on survey data to quantify the value of product attributes and to predict market outcomes.

Choice approaches measure the value of products to customers by quantifying the unique contributions of the attributes that combine to define the product. This brief article aims to help researchers understand the benefits of several technical advances in choice analysis: improved experimental design algorithms, segment- or customer-level models and model calibration.

Advances in these areas have provided significant benefits:

  • increased realism of market scenarios presented in survey choice tasks;
  • more reliable survey responses;
  • easier survey tasks without reducing the scope of deliverables;
  • respondent-level models that enable targeting of customers willing to pay more for specific product features;
  • testing of more complete and complex models of purchase decision-making;
  • greater accuracy of market share and revenue estimates.

First, let’s set the stage by describing the kinds of research questions addressed by choice models. From this backdrop, we will understand how the benefits of new methods are realized.

Key marketing questions in choice modeling studies

The fundamental question addressed by choice models (sometimes called choice-based conjoint) is: “How do changes in brand, price and product features impact market share?”

Choice modeling is a good approach for determining how a product or service should be priced to maximize revenue or profits or to grow market share. While choice models can be built from actual market choices, marketing researchers focus on survey-based choice modeling, in which customers decide which products to buy, and how many, in hypothetical market scenarios presented in the survey. Compared to the actual marketplace, surveys provide an environment where new price levels and new product features can be tested with ease, enabling the researcher to predict the impact of many possible price and feature changes on market outcomes.

Choice modeling enables researchers to determine what combination of product features or what bundles of products in a product line would most improve revenue or profit. Research findings help firms develop new products, fine-tune promotional strategy and develop service packages that appeal to large groups of customers.

For a new brand or product entering a market, choice models project how much market share and revenue the entrant will acquire and how it will affect the market share and revenue of existing brands or products (i.e., incremental and cannibalized share and revenue).

Improved experimental design algorithms

Experimental designs select combinations of attributes and levels for each alternative in a market scenario. The combinations are selected to ensure that the relative value to customers of each part of a brand or product (e.g., price, size and packaging) can be measured with maximized reliability.

Improved experimental design software has enabled researchers to produce more realistic scenarios to test in survey choice tasks. For example, suppose one wireless communications provider offers multiple service plans. It would be unrealistic for the same brand to offer two wireless plans that are identical in all aspects, except that one includes more minutes and a lower monthly fee than the other. Today, experimental design software can avoid such combinations of attributes while still producing experimental designs with high reliability.
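To make the idea concrete, here is a minimal Python sketch (not any commercial design package) of how a design generator might screen out unrealistic scenarios in which one plan from the same brand dominates another. The attribute levels and the two-attribute dominance rule are hypothetical, chosen only to illustrate the principle.

```python
import itertools

# Hypothetical wireless-plan attributes (levels are illustrative only)
minutes = [300, 600, 1000]
monthly_fee = [29, 39, 49]

def dominates(plan_a, plan_b):
    """plan_a dominates plan_b if it offers at least as many minutes for a
    fee no higher, and is strictly better on at least one attribute."""
    at_least_as_good = plan_a[0] >= plan_b[0] and plan_a[1] <= plan_b[1]
    strictly_better = plan_a[0] > plan_b[0] or plan_a[1] < plan_b[1]
    return at_least_as_good and strictly_better

def valid_scenario(plans):
    """Reject scenarios in which one plan from the same brand dominates another."""
    return not any(dominates(a, b) for a, b in itertools.permutations(plans, 2))

# Enumerate candidate two-plan scenarios and keep only the realistic ones
all_plans = list(itertools.product(minutes, monthly_fee))
scenarios = [s for s in itertools.combinations(all_plans, 2) if valid_scenario(s)]
```

A real design algorithm would also optimize the statistical efficiency of the retained scenarios; the sketch only shows how prohibited combinations can be filtered out.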

Another benefit of improved experimental design software is that survey choice tasks can be made easier for the respondent while still handling complicated products that have many features.

For example, respondent survey choice tasks that elicit a choice from among multiple personal computer brands, where each brand is described by 15 or more attributes, can be quite tiring. Newer software can produce partial profile choice designs that select a subset, say five, out of the total set of 15 or more attributes to present in each choice scenario. Only five attributes vary across brands, so only these five attributes need to be shown to respondents. Showing only five attributes per choice task reduces the time required to read through each scenario, shortens the overall interview and increases the quality of respondent choices. Reduced respondent burden is a major advantage today, when survey respondents have many demands on their time and want shorter interviews.
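The rotation at the heart of a partial profile design can be sketched in a few lines: each choice task shows only a small subset of the attributes, chosen so that every attribute appears about equally often across tasks. This is an illustrative heuristic, not the algorithm used by any particular software package.

```python
import random

attributes = [f"attr_{i}" for i in range(15)]   # hypothetical attribute names

def partial_profile_tasks(n_tasks, shown_per_task=5, seed=0):
    """Select a subset of attributes to vary in each choice task,
    balancing how often each attribute appears across tasks."""
    rng = random.Random(seed)
    counts = {a: 0 for a in attributes}
    tasks = []
    for _ in range(n_tasks):
        # Prefer the attributes shown least so far (ties broken randomly)
        ordered = sorted(attributes, key=lambda a: (counts[a], rng.random()))
        subset = ordered[:shown_per_task]
        for a in subset:
            counts[a] += 1
        tasks.append(subset)
    return tasks
```

With 12 tasks of five attributes drawn from 15, every attribute is shown exactly four times, so part-worths for all attributes remain estimable even though respondents never see more than five at once.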

Segment- or customer-level models

Segment-level models have unique parameters for each sub-segment of the total population of customers. During model development, segments of customers (who share similar market responses to changes in product prices and features) are discovered and separate model parameters are produced for each segment.

Customer-level models have unique parameters for each individual customer or survey respondent. Since every individual truly has unique tastes and preferences, customer-level choice models are more realistic. For example, one customer might be very price sensitive and brand loyal, while another might be moderately sensitive to price but not loyal to any brand. Customer-level modeling uses survey responses to: determine the most likely distributions (across customers) for price and brand preference parameters; and estimate each individual respondent’s price sensitivity and brand preferences.

Segment-level models are typically produced by latent-class (LC) models, and customer-level models by hierarchical Bayes (HB) models. However, LC models can also deliver customer-level parameters, either by assigning each respondent to a most probable segment or by weighting segment-level parameters by the respondent’s probabilities of segment membership.
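The two ways of deriving customer-level parameters from an LC model can be sketched as follows; the segment parameters and membership probabilities are hypothetical numbers chosen only for illustration.

```python
# Hypothetical latent-class output: price-sensitivity parameter per segment
segment_params = [-2.0, -0.5, -1.2]          # three segments

# Posterior probabilities of segment membership for one respondent
membership_probs = [0.7, 0.1, 0.2]

# Option 1: assign the respondent to the most probable segment
modal_segment = membership_probs.index(max(membership_probs))
modal_param = segment_params[modal_segment]

# Option 2: probability-weighted average of segment parameters
weighted_param = sum(p * b for p, b in zip(membership_probs, segment_params))
```

The weighted average (here -1.69 versus the modal value of -2.0) smooths across segments and avoids the hard assignment error that modal allocation can introduce for respondents whose membership is ambiguous.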

LC and HB models use very different statistical algorithms to produce the final model parameters, and in many cases the final results are similar. This author has estimated both LC and HB choice models using the same source data and found similar patterns of responses.

In general, HB methods enable researchers to investigate more complex decision-making processes. For example, a recent application (Gilbride and Allenby, 2004) applies an HB model with two decision-making stages. First, consumers use a screening process to decide which products to consider. Second, consumers make a purchase decision among the products that are considered. This HB model not only delivers relative preferences for the various product features, but also estimates customer-level threshold values for price and feature functionality that must be exceeded in order for a product to be considered. As this example shows, HB gives sophisticated researchers great flexibility to try out new models of consumer behavior.

Segment- and customer-level models have enabled companies to:

  • develop new products and services for targeted sub-groups of the total population (based on customer-level model parameters);
  • improve retention and acquisition campaigns by targeting segments or individuals that exhibit high preferences for particular product features (based on customer-level model parameters);
  • test more complete and complex models of purchase decision making.

Calibration of choice models

With really new products (that is, new concepts yet to be introduced to category buyers), choice models based on survey data usually produce biased results. For example, placing a new product into an existing competitive set can produce a predicted market share that is too low. On the other hand, exposing a new product concept to respondents before showing choice scenarios will almost always produce a predicted market share that is too high.

For existing products, price and feature elasticities can be biased if the survey questionnaire’s choice scenarios provide too much or too little information relative to real market scenarios or omit the impact on market choices of busy lifestyles and attitudes towards change.

Choice models can be calibrated to reduce bias in model predictions. The mathematics behind calibration can be explained in terms of the random utility model, the utility specification most widely used by marketing research practitioners. The random utility model assumes that total utility (the attractiveness of a product in terms of its attributes) is the sum of a measurable component (systematic utility) and a random component (random utility).

Total Utility of Brand A = Systematic Utility + Random Utility

In their simplest form, choice models specify systematic utility to be a sum of part-worth utilities (worth of each part of the product) minus the worth of the money required to purchase. For example, the total utility for a $2 bottle of Heinz ketchup would be the sum of part-worth utilities for brand name, type of bottle and size of bottle minus the worth of $2.

Systematic Utility = Part Worth of Heinz Brand + Part Worth of Glass + Part Worth of 14 oz. - Part Worth of $2
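Under the common assumption that the random utility component follows a Gumbel distribution, the random utility model leads to the familiar multinomial logit share formula. The sketch below works through the ketchup example with entirely hypothetical part-worth values; the numbers are for illustration only.

```python
import math

# Hypothetical part-worth utilities (illustrative numbers only)
part_worths = {"brand:Heinz": 1.5, "bottle:glass": 0.3, "size:14oz": 0.2}
utility_per_dollar = -0.8        # assumed (negative) worth of a dollar spent

def systematic_utility(attributes, price):
    """Sum of part-worths for the product's attributes minus the worth of its price."""
    return sum(part_worths[a] for a in attributes) + utility_per_dollar * price

u_heinz = systematic_utility(["brand:Heinz", "bottle:glass", "size:14oz"], 2.0)

# With Gumbel-distributed random utility, choice probabilities follow the
# multinomial logit formula: share_i = exp(u_i) / sum_j exp(u_j)
utilities = [u_heinz, 0.1]                  # 0.1: a hypothetical competitor
total = sum(math.exp(u) for u in utilities)
shares = [math.exp(u) / total for u in utilities]
```

Here the systematic utility of the $2 Heinz bottle is 1.5 + 0.3 + 0.2 - 1.6 = 0.4, and the logit formula converts the utilities of all products in the scenario into predicted choice shares that sum to one.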

Part-worth utilities estimated from survey responses may be biased. To reduce or eliminate bias that causes inaccurate predictions, researchers can calibrate choice models by adjusting the utilities to better predict actual market choices.

All serious practitioners acknowledge that choice models can produce market shares and price and feature responses that differ substantially from those of actual markets, and a range of calibration solutions has been implemented.

Traditional calibration solutions include:

  • Don’t calibrate, but use the choice model results as valuable inputs for strategic and tactical decision making.
  • Calibrate brand part-worth. Adjust part-worth utilities for brands to force a choice model to produce market shares from an external source: for example, scanner data or a forecast.
  • Rescale price or feature part-worth utilities. Proportionately rescale price and feature part-worth utilities based on the relative variability of random utility from survey responses vs. actual market choices.
  • Calibrate brand part-worth and rescale price or feature part-worth utilities. Not only adjust brand utilities but also rescale price and feature utilities.
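Calibrating brand part-worths, as in the second bullet above, amounts to adjusting brand constants until a logit-based simulator reproduces market shares from the external source. The utilities and target shares below are hypothetical, and this simple fixed-point adjustment is a sketch of the idea, not the specific procedure used in any of the cited research.

```python
import math

# Hypothetical model-predicted brand utilities
utilities = {"A": 0.8, "B": 0.2, "C": -0.3}

# Target market shares from an external source (e.g., scanner data)
target = {"A": 0.50, "B": 0.35, "C": 0.15}

def logit_shares(utils):
    """Multinomial logit shares implied by a set of brand utilities."""
    denom = sum(math.exp(u) for u in utils.values())
    return {brand: math.exp(u) / denom for brand, u in utils.items()}

# Repeatedly shift each brand constant by the log of the ratio of target
# to predicted share until the simulator reproduces the external shares
for _ in range(100):
    predicted = logit_shares(utilities)
    utilities = {b: u + math.log(target[b] / predicted[b])
                 for b, u in utilities.items()}

calibrated = logit_shares(utilities)
```

Because only the brand constants move, the calibrated model matches external shares while preserving the price and feature sensitivities estimated from the survey; rescaling those sensitivities, per the third and fourth bullets, is a separate step.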

Several new solutions are being investigated in academic and business circles to improve choice model calibration. First, recent research on rescaling of price and feature utilities includes very detailed comparison of survey choice models with household scanner data (Renkin, Rogers and Huber 2004). This research has focused on how much to rescale price utilities so as to minimize differences between survey choice model and household scanner data model predictions.

A new area of research that offers promise is the use of point-of-sale survey tasks that give respondents an incentive to state their true willingness to pay (Wertenbroch and Skiera, 2002). Information about respondents’ true willingness to pay can be used to simulate the slope of a demand curve and then rescale price utilities in survey-based choice models.

Based on personal experience, market share predictions for really new products can be greatly improved by incorporating additional survey responses. For example, survey responses that measure positive attitudes about a new brand or product concept statement can be combined with choice model simulations to deliver more reliable first-year market predictions. Further research should be done to validate this approach.

Finally, laboratory experiments have been proposed (Allenby et al., 2004) to understand the amount of adjustment of brand, price and feature utilities for different types of customers, bringing calibration to the individual customer level.

All of these calibration approaches have as their goal to increase the accuracy and reliability of market share and revenue predictions from choice models.

Implications for marketing researchers

Recent advances can reduce survey length for choice modeling research, increase ROI for target marketing programs and deliver more accurate market simulators to measure bottom-line revenue impacts. These benefits can be realized even for complicated products with many attributes.

Reduced survey length and an increased ability to handle complex products result from using improved experimental design software and partial profile designs.

The increased ROI in target marketing programs results from using hierarchical Bayes and latent class choice models to develop individual respondent-level choice models that predict responses to promotions. These individual- or customer-level models can themselves be modeled using traditional database or data mining approaches to populate an internal customer database, and can subsequently be used for targeting individual customers.

Market simulators that aggregate individual-respondent part-worth utilities to market outcomes, properly calibrated, can improve the ability to predict the bottom-line revenue impacts of product line extensions and restages.

References

Allenby, Greg; Fennell, Geraldine; Huber, Joel; Eagle, Thomas; Gilbride, Tim; Horsky, Dan; Kim, Jaehwan; Lenk, Peter; Johnson, Rich; Ofek, Elie; Orme, Bryan; Otter, Thomas; and Walker, Joan (2004). “Adjusting Choice Models to Better Predict Market Behavior,” Working Paper.

Gilbride, Timothy J. and Allenby, Greg G. (2004). “A Choice Model with Conjunctive, Disjunctive, and Compensatory Screening Rules,” Marketing Science, Vol. 23, No. 3.

Renkin, Tim; Rogers, Greg; and Huber, Joel (2004). “A Comparison of Conjoint and Scanner Data-Based Price Elasticity Estimates,” presented at Advanced Research Techniques Forum 2004, Whistler, B.C.

Wertenbroch, Klaus and Skiera, Bernd (2002). “Measuring Consumers’ Willingness to Pay at the Point of Purchase,” Journal of Marketing Research, Vol. 39, No. 2.