What's hot in conjoint: A recap from the 2013 Sawtooth Software Conference



Article ID: 20131225-1
Published: December 2013
Author: Chris Fotenos

Article Abstract

SKIM's Chris Fotenos reports back from the 2013 Sawtooth Software Conference with an overview of the themes and popular topics in conjoint analysis, including how to make conjoint more engaging, how to take it mobile and how to augment it with other data sources.

Editor's note: Chris Fotenos is a project manager at SKIM Analytical, a Hoboken, N.J., research company. He can be reached at 201-685-8260 or at c.fotenos@skimgroup.com. This article appeared in the December 9, 2013, edition of Quirk's e-newsletter.

Sawtooth Software, an Orem, Utah, developer of discrete choice modeling software, held its 17th U.S.-based conference this past October in Dana Point, Calif. Many of the top minds in conjoint gathered to share their learnings and progress in the field. Perhaps most importantly, they debated the future of discrete choice methods. There weren't necessarily any game-changing topics brought to the forefront, yet much of the discussion focused on questions all conjoint researchers have been - and must be - thinking about to continue extracting relevant insights from modern consumers: How can we make respondents more engaged in our surveys? Can conjoint go mobile and still provide high-quality results? And, as in any industry, how can we do more with the data we collect?

Increasing engagement

Chances are that anyone who has ever gone through a set of conjoint choice tasks did not find it to be the most exhilarating part of their day. However, with respondent attention spans shrinking every year, the need to make our conjoint studies more engaging has never been greater. Increasing engagement can come in two ways: by making the exercises themselves more interesting or by allowing respondents to take surveys whenever it is most convenient for them.

Fun

The first solution was confronted head-on by Jane Tang from Vision Critical in her presentation "Can Conjoint Be Fun?" Aside from the financial incentives that respondents receive for taking surveys, Tang noted that panelists find it important for the survey itself to be well executed. They like to feel that their responses are valued and taken into consideration by the end client.

To touch on these two points, Tang shared results from a study on women's preferences for their male partners. She found that including a choice tournament-type exercise (think of it like an NCAA bracket in which only winning concepts move on to later questions) after a set of choice-based conjoint (CBC) tasks can improve the accuracy of results. While the size of the improvement is not yet conclusive, the finding is good news for other tournament-style choice exercises like adaptive choice-based conjoint (ACBC). Building on this, SKIM showed that ACBC works quite well in complex markets, even when the exercise is simplified.
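For readers unfamiliar with the mechanics, a choice tournament pairs or groups concepts and advances only the winners to later rounds. The Python sketch below illustrates the bracket logic under the simplifying assumption that a respondent's picks follow known total utilities; the concept names and utility values are hypothetical.

    import random

    def run_choice_tournament(concepts, utilities, group_size=2):
        """Single-elimination bracket: concepts meet in small groups
        and only each group's winner advances to the next round."""
        round_ = list(concepts)
        random.shuffle(round_)  # seed the bracket randomly
        while len(round_) > 1:
            winners = []
            for i in range(0, len(round_), group_size):
                group = round_[i:i + group_size]
                # Stand-in for the respondent's click: assume they
                # pick the highest-utility concept in the group.
                winners.append(max(group, key=lambda c: utilities[c]))
            round_ = winners
        return round_[0]  # the predicted overall favorite

    # Hypothetical: eight concepts with assumed total utilities.
    utilities = {f"concept_{i}": u for i, u in
                 enumerate([1.2, 0.4, -0.3, 2.1, 0.0, 1.7, -1.0, 0.9])}
    print(run_choice_tournament(list(utilities), utilities))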
 
Additionally, Tang found that sharing results with the respondent, in this case the features of their predicted ideal partner, leads to more people commenting that they had fun while taking the survey. It is encouraging that we can take something we have readily available, in this case the data the respondent just gave us, and use it to improve the survey-taking experience. Our main job is to show our clients that we listen to the voice of the consumer and decipher what it means. So it doesn't hurt to pull back the curtain every once in a while and show the respondent that we hear them too - after all, the best conversations go two ways, not one.
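Feeding results back is computationally cheap once individual part-worth utilities have been estimated: a respondent's "predicted ideal" is simply the best level of each attribute. A minimal sketch, assuming part-worths are already available in a nested dict (the attributes and values here are invented for illustration):

    def predicted_ideal(partworths):
        """Return the highest-utility level of each attribute for one
        respondent, e.g. to show back as their 'ideal' profile."""
        return {attr: max(levels, key=levels.get)
                for attr, levels in partworths.items()}

    # Hypothetical part-worths for one respondent.
    partworths = {
        "height": {"short": -0.8, "average": 0.2, "tall": 0.6},
        "humor": {"dry": 0.9, "slapstick": -0.4, "none": -0.5},
        "tidiness": {"messy": -1.1, "average": 0.3, "neat": 0.8},
    }
    print(predicted_ideal(partworths))
    # -> {'height': 'tall', 'humor': 'dry', 'tidiness': 'neat'}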

Including this type of feedback mechanism would bring our surveys closer to the polls Web surfers see and take every day - voluntarily, no less! - on news sites and social networks. Pushing people to think of surveys in the same realm as some of their favorite online activities may be necessary for maintaining a sizeable sampling population. No matter how advanced methodologies become, a representative sample will always be essential.

Realistic

Next to making conjoint exercises more fun, many researchers aim to make them more realistic. Several presentations focused on crafting choice exercises that mimic the actual choice environment as closely as possible. Karen Fuller from HomeAway, an online marketplace for the vacation rental industry, shared an example of how her company ran a menu-based conjoint study to test pricing for account subscriptions offered to renters on its Web site.

In the study, the choice tasks were programmed to look very similar to the actual HomeAway Web site, as was the simulator used to analyze the results. As a result of the study, the company increased trade-up to higher-priced subscription tiers and thereby increased revenue. While it is difficult to say whether the success of the study was due to the visual quality of the exercise, it is safe to assume that respondents were more engaged given their familiarity with the process.

It may be easier to replicate an online shopping environment in an online survey than to put respondents in the mind-set of an in-store purchase. Yet creating a sense of familiarity can be crucial to a respondent's understanding of the exercise itself, and if a respondent has a difficult time understanding the exercise, it becomes difficult to extract meaning from the data. Take, for example, a typical portfolio pricing study in the fast-moving consumer goods industry. To a respondent, this means walking down an aisle, checking price tags, picking items off the shelf and putting them into the shopping basket. As Peter Kurz and his colleagues shared in their presentation, titled "Research Space and Realistic Pricing in Shelf Conjoint," many research firms have already moved toward a more visual, shelf-layout style of conjoint that seeks to replicate this in-store shopping behavior in a virtual environment. Whereas more traditional studies would show items as single units on the shelf, these studies cascade products across the shelf and allow respondents to drag and drop items into their baskets.

While this increased realism is nice to show to clients who may be skeptical of conjoint analysis, one may ask if it really improves the quality of the data. Perhaps the better question is whether we can collect different information with these types of studies and thereby answer other questions for our clients. For example, testing different shelf layouts may not have made sense in a more traditional conjoint study but if we can virtually create shelf layouts for respondents to experience, it opens the door to gaining insights more quickly and cheaply than an in-person study.

Shifting to mobile

More important than making sure respondents are engaged in our studies is ensuring that they continue to take our studies and provide high-quality answers to our questions. It's a given that the marketing research industry is shifting to mobile as consumers allocate more and more of their attention to the platform. It is also quite apparent that conjoint is possible on a mobile device - it's just a matter of adjusting the look and feel of the exercise to fit a smaller screen with alternative inputs. The question for conjoint, then, is whether the methodology can provide the same quality of results on a mobile device as it does on a more traditional platform like a laptop or desktop computer.

Several conference speakers tackled this question, with encouraging results. Chris Diener of AbsolutData looked into mobile conjoint reliability with a study on tablet preferences in the U.S. and India. Alongside several other simplified methodologies, Diener compared mobile CBC with two or three concepts on the screen at once to PC-based CBC with three concepts on a screen at a time. In both countries, mobile CBC with two concepts simultaneously on the screen produced holdout task hit rates that were significantly better than PC-based CBC, while mobile CBC with three concepts on the screen was in line with its PC counterpart. Despite the increased hit rates, Diener saw longer interview times for respondents on mobile devices and generally lower enjoyment ratings for the exercise. So while this was a pleasant story for the accuracy of mobile conjoint, there is clearly still room to improve the respondent experience.
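For readers who want the metric made concrete: a holdout hit rate is the share of held-out choice tasks in which the estimated utilities point to the concept the respondent actually chose. A hedged sketch of the calculation (the data layout is an assumption, not Sawtooth Software's format):

    import numpy as np

    def hit_rate(holdout_tasks, choices):
        """holdout_tasks: one array of total concept utilities per task;
        choices: index of the concept the respondent actually chose.
        Returns the share of tasks where the model's top pick matches."""
        hits = sum(int(np.argmax(task) == chosen)
                   for task, chosen in zip(holdout_tasks, choices))
        return hits / len(holdout_tasks)

    # Hypothetical data: three holdout tasks, three concepts each.
    tasks = [np.array([0.2, 1.4, -0.5]),
             np.array([0.9, 0.1, 0.3]),
             np.array([-0.2, 0.8, 1.1])]
    chosen = [1, 0, 1]
    print(f"hit rate: {hit_rate(tasks, chosen):.0%}")  # -> 67%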

Joseph White of Maritz Research also looked into this question through the lens of a study on tablet preferences in the U.S. Like Diener, White saw slightly higher hit rates for respondents taking the exercise on a mobile device versus a PC, and he observed that predictions for tablet users were roughly on par with those for PC users. Despite finding similar accuracy across mobile devices, tablets and PCs, however, the three device types produced different results with respect to attribute importance: respondent sensitivities, as well as preferences for brand and screen size, differed across the devices, and mobile and PC respondents tended to be more price-sensitive than tablet respondents.

For starters, it is important to remind ourselves that these studies were centered on preferences for the very devices they aimed to compare. It is very likely that mobile and tablet respondents have a clearer idea of their preferences for tablet features than PC respondents do, which may lead to more accurate results even if those respondents were hindered by their smaller screens.

White also performed a second case study on partner preferences in relationships. He found very similar results, both in terms of hit rates and preferences, across all three devices. While this may quell some fears about the validity of the first comparisons, it raises another question about what exactly we should be comparing across devices.

If we can believe that the results for PCs, tablets and mobile devices will be accurate, then we can have faith in allowing respondents into our surveys through all three channels. Depending on the topic of interest though, we need to be very careful about how we sample across all three devices. In the end, our clients want answers to their questions about consumer preferences. If consumer preferences differ across users of each of these devices for the category of interest, then our sampling criteria must reflect the composition of the consumer base.

For example, in White's study, preference for the Apple brand was much higher among tablet respondents - but they are by no means the only people who will be buying tablets this holiday shopping season, so any forecasting study should take that into account in its sample composition. It is important to be accurate in our predictions of individual respondent preferences but all of that is for naught if it adds up to a model that fails to predict the overall market.
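One simple way to handle this is post-stratification: weight each device group so the weighted sample matches the expected composition of the buyer base before simulating shares. A sketch under assumed numbers (the device mix below is illustrative, not from White's study):

    def poststrat_weights(sample_counts, market_shares):
        """Weight each group so the weighted sample matches an assumed
        market composition. Returns one weight per group."""
        n = sum(sample_counts.values())
        return {g: market_shares[g] * n / sample_counts[g]
                for g in sample_counts}

    # Hypothetical: tablet users over-represented vs. expected buyers.
    sample = {"pc": 500, "tablet": 300, "mobile": 200}
    market = {"pc": 0.55, "tablet": 0.20, "mobile": 0.25}
    print(poststrat_weights(sample, market))
    # -> pc: 1.10, tablet: ~0.67, mobile: 1.25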

Augmenting conjoint data

Several studies showed that mobile conjoint research can be as accurate as similar research performed on a PC. But it remains to be seen whether mobile can handle much more complex studies, such as those with many attributes and levels or those that are more visually demanding. The solution to this potential issue could lie in augmenting conjoint data with data from other methodologies.

Representatives from The Modellers, TNS and Millward Brown shared results on experiments augmenting max-diff data with the results of a Q-sort exercise. The key finding was that while augmentation can provide greater insight at an individual level, at an aggregate level results were generally in line with those of the stand-alone max-diff data. Thus, augmenting discrete choice data with data from another methodology may not be the best use of the researcher's or respondent's time. Instead, it may be optimal to complement discrete choice data with data from alternative sources, such as scanner data or social media.
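For context, stand-alone max-diff results are often summarized first as simple best-minus-worst counts before any utility estimation; the augmentation experiments layered extra information on top of this kind of base. A minimal sketch of the raw counting (the data layout is assumed):

    from collections import Counter

    def bw_scores(picks):
        """picks: (best_item, worst_item) pairs across max-diff tasks.
        Returns best-minus-worst counts, a common raw summary."""
        best = Counter(b for b, _ in picks)
        worst = Counter(w for _, w in picks)
        return {i: best[i] - worst[i] for i in set(best) | set(worst)}

    # Hypothetical picks from a handful of max-diff tasks.
    picks = [("A", "C"), ("B", "C"), ("A", "D"), ("B", "A"), ("A", "C")]
    print(bw_scores(picks))
    # e.g. {'A': 2, 'B': 2, 'C': -3, 'D': -1} (order may vary)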

It is fairly common practice in the conjoint world to go beyond the utilities derived from choice tasks and apply factors such as distribution, purchase frequency and volume per purchase to simulation models. These enhancements come at a product or concept level, however, and do not necessarily enrich the data at the respondent level; there, model covariates or the use of prior information in the estimation of utilities come to mind. The source of this information does not need to come directly from a survey. With big data sources growing bigger every day, perhaps it's time for conjoint to expand its network even further outside of surveys to continue developing as a methodology.
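As a simple illustration of the concept-level version of that enrichment, a logit share-of-preference simulator can fold in factors like distribution, purchase frequency and volume per purchase. Everything below, factor values included, is a hypothetical sketch rather than any particular vendor's model:

    import numpy as np

    def volumetric_shares(utilities, distribution, purchase_freq, volume):
        """Multinomial-logit share of preference, adjusted by concept-
        level market factors to approximate a volume share."""
        exp_u = np.exp(utilities - utilities.max())  # stable softmax
        pref = exp_u / exp_u.sum()                   # share of preference
        vol = pref * distribution * purchase_freq * volume
        return vol / vol.sum()                       # volume share

    # Hypothetical three-concept market.
    u    = np.array([0.4, 1.1, 0.2])   # average total utilities
    dist = np.array([0.9, 0.6, 0.8])   # distribution (e.g. % ACV)
    freq = np.array([1.0, 1.2, 0.8])   # purchases per period
    vol  = np.array([1.0, 1.0, 2.0])   # units per purchase
    print(volumetric_shares(u, dist, freq, vol))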

Easily applied

The learnings from the conference can easily be applied to the daily practices of any conjoint researcher. It was great to see the hard work and passion put forth by each of the presenters and the debate they spurred. Moving forward in the mobile world, conjoint is here to stay.
