Editor's note: Kevin Lattery is vice president of methodology and innovation in the Hoboken, N.J., office of research company SKIM. Jeroen Hardon and Kees van der Wagt are research directors in SKIM’s Rotterdam, Netherlands, office.

Conjoint analysis is a frequently used methodology for understanding how consumers manage trade-offs during the decision-making process. For example, how will consumers respond if we offer a larger size at a slightly higher price? Will my new product cannibalize sales from my existing portfolio or draw sales from competitors’ products? What if I change the price again? What if my competitors change their prices, sizes or offerings? These are just some of the questions addressed by a conjoint study.

In some cases, we want to examine these consumer trade-offs in a larger competitive space. For example, we may want to understand the market dynamics among hundreds of products with different sizes and prices. Think of the number of soft-drink options or the variety of snacks available. This might include those at your local store, along with many other potential products.

Our ability to program conjoint surveys has improved significantly in the last decade and today we can show respondents realistic simulated shelf sets with many products on a computer screen. But a computer screen is not the same as a store. As the number of products increases, fitting them all on a screen becomes a challenge, and at a certain point the scope of the project becomes unwieldy. Confronted with the limitations of screen real estate in a conjoint survey, one alternative is to use something called evoked sets.

Evoked sets are possible because, for any given consumer, there is a smaller subset of products from which they actually make trade-offs. For any individual respondent, many of the products available are simply not in their consideration set. In the soft-drink market, for example, each consumer usually buys from a short list of brands, flavors and pack sizes, despite the fact that there are hundreds of options to choose from. Of course, the specific items in a consideration set differ across respondents. Evoked sets build on this idea by first finding out which products make up a specific respondent’s consideration set and then building a custom conjoint task. For respondents, it’s like walking into a store stocked with a subset of products customized just for them.

Fielding a conjoint study with evoked sets means one must be able to design conjoint screens that can be customized for each respondent. This in turn can be a challenge for the survey programmers who must take a custom list of products and make it real on the computer screen at runtime, without the benefit of a human to pause and clean things up. This article will not address the challenges in survey programming. Instead, it will focus on how evoked sets also require a well-thought-out approach to experimental design and expertise in analysis.

Reduces respondent fatigue

In creating an evoked set, the goal is to select all potential products that are relevant for the respondent, so it is best to avoid excluding products too hastily. In the ideal case, we ask about a respondent’s entire consideration set, eliminating only those products they would never consider anyway. Done correctly, this may actually yield better data, as it reduces the respondent fatigue that comes from confronting a lot of extra noise (choices that are useless for that respondent).

Asking respondents about their consideration set can be done in many ways and depends upon the topic of study. One approach is to use questions about past behavior:

  • Which of these products have you purchased in the past three months?
  • Which of the following products did you consider purchasing in the past three months?

Another approach is to ask about future intentions:

  • Which of these products would you consider buying on your next shopping trip?
  • Which of the following products would you consider purchasing in the next three months?
  • Which of the following products are you most likely to consider buying in the next three months?
  • Which of the following products would you never consider purchasing?

Sometimes one can ask more strategic questions about the brand, size or features a consumer needs. For instance, in a study of refrigerator shopping one might ask whether the shopper has size restrictions, or whether certain configurations are unacceptable (for example, maybe they won’t consider a freezer on the bottom).

It is common to ask multiple sets of questions to get at the evoked set. This helps avoid dismissing products too quickly: a product is placed outside the evoked set only if it falls outside the consideration set for all the relevant questions, as in the sketch below.
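As a concrete illustration, here is a minimal Python sketch of that rule; the product IDs and question names are hypothetical. A product is excluded only when every screening question fails to retain it:

    all_products = {"P01", "P02", "P03", "P04", "P05"}

    # One entry per screening question: the products this respondent selected
    answers = {
        "purchased_past_3mo":  {"P01", "P02"},
        "considered_past_3mo": {"P02", "P03"},
        "consider_next_trip":  {"P03"},
    }

    # Keep a product if ANY question retained it; exclude only if ALL failed
    evoked = set().union(*answers.values())   # {"P01", "P02", "P03"}
    excluded = all_products - evoked          # {"P04", "P05"}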

No matter how careful one is, it is entirely possible a consumer will buy a product even when they say they would never consider it. This has been confirmed many times in survey data: respondents are shown products they said (even multiple times) they would never buy, yet they still choose them. It appears that screened-out products are highly undesirable but may still be chosen under the right conditions. By analogy, your neighbor might say his house is not for sale, but if someone knocked on his door and offered twice what he paid for it, he might sell after all. In other words, stated screening rules are not perfect.

Because the respondent’s stated considerations are not perfect, one can supplement the set of stated consideration products with additional products. In fact, it is always wise to add a few random products to the set. If there is still room for the respondent to evaluate more products, then consider adding products that are similar to the products in the respondent’s set. This assumes, of course, a baseline understanding of which products are similar to one another and frequently cross-purchased.

Sometimes the set of products evoked is still too large. A few respondents appear to be open to almost anything, so the survey questions designed to screen out products may leave us with almost as many as we started with. In these cases, if there are several screening questions, one may prioritize which of them to use to form the consideration set; for instance, one could use only those products the respondent has purchased in the past three months. Even then, one should still supplement that list with other products chosen randomly. In the end, one might have to randomly select products from the larger set initially created for the respondent.
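Putting the last two ideas together, here is a minimal sketch of one way to assemble the final evoked set. The function name, arguments and target size are illustrative assumptions, not a prescribed recipe:

    import random

    def build_evoked_set(stated, similar, universe, target_size=12, n_random=2):
        # Start from the respondent's stated consideration set
        evoked = list(stated)
        # Always add a few purely random products to offset selection bias
        pool = [p for p in universe if p not in evoked]
        random.shuffle(pool)
        evoked += pool[:n_random]
        # If there is room, fill with products similar to the stated set
        for p in similar:
            if len(evoked) >= target_size:
                break
            if p not in evoked:
                evoked.append(p)
        # If still too large, fall back to a random subsample
        if len(evoked) > target_size:
            evoked = random.sample(evoked, target_size)
        return evoked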

Random selection of the products to be tested for a respondent is not all bad. In fact, some researchers prefer a purely random selection of products rather than developing a customized evoked set. From a theoretical point of view, random selection is better because it avoids so-called selection bias; as we will see, non-random selection of products introduces real challenges at the analysis stage. So why not use a purely random selection? From the respondent’s point of view, the conjoint task may seem boring and irrelevant: respondents may be choosing from a set of products they care nothing about, which can induce boredom with the survey and more random choosing. Evoked sets make the conjoint task more relevant and engaging. The resulting selection bias can be weakened by using multiple questions to create the evoked set and eliminating a product only when it fails across all questions. Supplementing the respondent’s consideration set with a random selection of additional products further reduces selection bias.

Requires expertise

From the managerial standpoint, the key thing to know about the analysis of evoked conjoint is that it requires expertise: evoked conjoint is much more difficult to analyze properly than standard conjoint. Here are some of the reasons this data is more difficult to analyze, along with tips for overcoming the challenges.

What makes evoked set data more complex?

Evoked set data are almost always sparse. If there were only a few products, one wouldn’t need evoked sets, so these studies typically involve many products. This means there are many parameters and only a few choices; in some cases there could be 200 or more parameters involved. Moreover, the sparsity is often compounded because each choice typically involves just a few attributes, like SKU and price, whereas in a traditional conjoint each choice gives us information on many parameters. Given the large number of parameters and the relatively small amount of information, it becomes very easy to overfit the data.

Hierarchical Bayes (HB) is probably the most common method for analyzing conjoint data. Assuming one uses HB, the sparsity of evoked set data requires adjusting the prior parameters so that HB will “borrow” more information from the total sample. More technically, one will typically lower the prior variance and increase the additional degrees of freedom to give more power to the upper-level covariance model that supervises HB. We strongly recommend testing different parameters here but in our experience the prior variance should be much lower than 1, and typically less than 0.5.
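As a rough illustration, assuming an HB-MNL routine with the common inverse-Wishart prior on the covariance of respondent-level betas, the settings might be encoded as follows. The variable names and values are illustrative starting points to test, not a real package’s API or recommended defaults:

    import numpy as np

    n_params = 200          # e.g., many SKU and price parameters
    prior_variance = 0.3    # well below the usual default of 1.0
    additional_df = 20      # extra degrees of freedom: stronger pooling

    # Inverse-Wishart prior on the upper-level covariance matrix
    prior_cov = prior_variance * np.eye(n_params)
    df = n_params + additional_df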

The complete list of items tends to yield natural groupings, or what is called a nested structure: respondents tend to trade off among some similar items more than others. For instance, lowering the price of the largest size of Product A may draw more demand from the smaller sizes of Product A. Likewise, there is more similarity within brands or within subcategories (like diet vs. non-diet). Capturing these nested structures can be a challenge and is not something learned in basic conjoint training or available in most packaged software.

To overcome this challenge, one can estimate models using nested logit, a standard approach well-documented in the academic literature for handling correlated alternatives. It introduces an additional parameter for each nest of items that represents something like the degree of correlation among the products in the nest. This parameter is derived from the data, and when there is no correlation among the items in a nest, the model reduces to the standard logit. Currently, nested logits are difficult to execute well in HB; one can instead use latent-class or penalized respondent regression (more detail below), which rely on standard logistic regression methods and can easily be modified to accommodate nested logit. One final caveat: it is often desirable to estimate several different nested logit models, each with a different way of grouping products, and then average the predictions over these models rather than assuming only one model. This is called an ensemble approach.
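For readers who want the mechanics, here is a minimal sketch of nested logit choice probabilities; the function and its inputs are hypothetical illustrations, not taken from any particular package. Each nest k has a dissimilarity parameter lam[k] in (0, 1]:

    import numpy as np

    def nested_logit_probs(V, nests, lam):
        # V: product -> deterministic utility
        # nests: nest name -> list of products in that nest
        # lam: nest name -> dissimilarity parameter in (0, 1]
        # Inclusive value (logsum) of each nest
        iv = {k: np.log(sum(np.exp(V[j] / lam[k]) for j in items))
              for k, items in nests.items()}
        denom = sum(np.exp(lam[k] * iv[k]) for k in nests)
        probs = {}
        for k, items in nests.items():
            p_nest = np.exp(lam[k] * iv[k]) / denom        # P(choose nest k)
            within = sum(np.exp(V[j] / lam[k]) for j in items)
            for j in items:                                # P(j | nest k)
                probs[j] = p_nest * np.exp(V[j] / lam[k]) / within
        return probs

With every lam[k] equal to 1, the probabilities collapse to the standard logit shares, matching the reduction described above; values below 1 make the items in a nest closer substitutes for one another than for products outside the nest.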

Most likely the biggest challenge is how to incorporate the respondent’s specific evoked products into the modeling. The raw conjoint data shows only that certain products were not shown to certain respondents, so the model will assume the missing products are simply missing at random. That is far from the truth: the products are missing because they are unlikely to be chosen by the respondent. Informing the model that the missing products are undesirable, rather than missing at random, is crucial. Of course, this is not a problem when one uses a random selection of products rather than an evoked set.

There are several ways to overcome this. One of the easier and relatively effective ways is to add synthetic data to the set of conjoint tasks actually shown: one constructs data that informs the model about which products were included and excluded. A simple example is to pretend that we showed the respondent all of the excluded alternatives (even though we did not) and tell the model that none of them were picked. This helps, but a better approach is to add synthetic binary tasks that pit each product against an anchor alternative, with the consideration products winning and the excluded products losing. Under HB, this addition of synthetic binary-choice data introduces some other complexities not covered in this article.
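A minimal sketch of constructing such synthetic binary tasks might look like the following; the record layout and the "ANCHOR" label are hypothetical conveniences, and in practice these rows would be appended to the respondent’s real choice tasks before estimation:

    def synthetic_binary_tasks(considered, excluded, anchor="ANCHOR"):
        tasks = []
        for p in considered:
            # stated consideration products are recorded as beating the anchor
            tasks.append({"alternatives": (p, anchor), "chosen": p})
        for p in excluded:
            # screened-out products are recorded as losing to the anchor
            tasks.append({"alternatives": (p, anchor), "chosen": anchor})
        return tasks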

In some cases, one can derive better results using methods other than HB. For instance, penalized respondent-level regression (e.g., Frischknecht et al., 2014) can often work well. Its advantage is complete control over each respondent: one can handle excluded alternatives by directly telling the model to estimate betas only for the subset of parameters relevant to that respondent, and one can include other respondent-level information, such as preferred products or nested structures, directly. In general, it is preferable to analyze evoked conjoint using a broad toolkit, adapted as needed to the study.
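To make the idea concrete, here is a minimal sketch of a penalized respondent-level conditional logit, inspired by (but not reproducing) the approach in Frischknecht et al. (2014); the data layout, ridge penalty and shrinkage toward pooled estimates are all illustrative assumptions:

    import numpy as np
    from scipy.optimize import minimize

    def fit_respondent(X_tasks, choices, active_idx, beta_pool, ridge=1.0):
        # X_tasks: one (n_alternatives x n_params) design matrix per task
        # choices: index of the chosen alternative in each task
        # active_idx: parameters inside this respondent's evoked set;
        #             all others stay fixed at the pooled estimates
        # beta_pool: total-sample estimates the penalty shrinks toward
        def negloglik(b_active):
            beta = beta_pool.copy()
            beta[active_idx] = b_active
            nll = 0.0
            for X, c in zip(X_tasks, choices):
                u = X @ beta                     # utilities of the alternatives
                nll -= u[c] - np.log(np.exp(u).sum())
            # ridge penalty pulls respondent betas toward the pooled values
            return nll + ridge * np.sum((b_active - beta_pool[active_idx]) ** 2)

        res = minimize(negloglik, beta_pool[active_idx], method="BFGS")
        return res.x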

Customize the marketplace

Sometimes we want to understand the marketplace dynamics of many products, which can be challenging to investigate in a survey with limited screen real estate. One solution is to customize the marketplace for each respondent: rather than showing all the products, we show those that are most relevant, plus some additional random products to add real-world noise. The resulting conjoint survey does not have as much noise as the real world but, as a survey, it is more doable and engaging than one with cluttered screens of mostly irrelevant products.

Evoked conjoint studies require more work and analytical expertise. The analytical challenges with evoked sets are current topics of discussion at analytical conferences and further reading is suggested below. But in the end one can construct respondent-specific consideration sets, understand the trade-offs within those sets and build on those to create a full understanding of marketplace dynamics.


REFERENCES AND FURTHER READING

Belyakov, Dmitry. (2015) “Precise FMCG market modeling using advanced CBC.” Proceedings of the Sawtooth Software Conference: 261-266.

Carson, Richard T. and Jordan J. Louviere. (2014) “Statistical properties of consideration sets.” Journal of Choice Modelling 13: 37-48.

Eagle, Thomas. (2015) “Selection bias in choice modeling using adaptive methods.” Proceedings of the Sawtooth Software Conference: 261-266.

Fader, Peter S. and Bruce Hardie. (1996) “Modeling consumer choice among SKUs.” Journal of Marketing Research 33 (November): 442-452.

Frischknecht, Bart, Christine Eckert, John Geweke and Jordan Louviere. (2014) “A simple method for estimating preference parameters for individuals.” International Journal of Research in Marketing 31(1): 35-48.

Lattery, Kevin. (2009) “Coupling stated preferences with conjoint tasks to better estimate individual level utilities.” Proceedings of the Sawtooth Software Conference: 101-109.

York, Sue and Geoff Hall. (2000) “Using evoked set conjoint designs to enhance conjoint data.” Proceedings of the Sawtooth Software Conference: 101-109.