Conjoint goes mobile



Article ID: 20140805
Published: August 2014, page 30
Author: Gerard Loosschilder

Article Abstract

SKIM's Gerard Loosschilder explores how conjoint and its many positive attributes can be successfully moved to the mobile environment.

Editor's note: Gerard Loosschilder is chief innovation officer at SKIM, an international research firm.

Choice-based conjoint analysis is a member of the family of choice modeling approaches. In choice-based conjoint analysis (CBC), a product is dissected into its constituents or attributes – such as brand, price, features and specifications – each with its specific versions or “levels.” By systematically varying the attribute levels across profiles – a profile is a product description with a specific set of attribute levels to be tested – and repeatedly asking consumers to make choices among profiles, one can study consumer trade-offs and attribute sensitivities. This helps to identify optimal product configurations at given price points.
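To make the mechanics concrete, here is a minimal sketch in Python with hypothetical attributes and levels: it enumerates the possible profiles and draws a small set of choice tasks at random. A real study would use a balanced, statistically efficient design rather than random draws.

import itertools
import random

# Hypothetical attributes and levels, for illustration only
attributes = {
    "brand": ["Brand A", "Brand B", "Brand C"],
    "price": ["$1.99", "$2.49", "$2.99"],
    "size": ["250 ml", "500 ml"],
}

# A profile is one combination of attribute levels
profiles = [dict(zip(attributes, combo))
            for combo in itertools.product(*attributes.values())]

def build_choice_tasks(profiles, n_tasks, profiles_per_task, seed=1):
    # Draw random choice tasks; real studies use balanced, efficient designs
    rng = random.Random(seed)
    return [rng.sample(profiles, profiles_per_task) for _ in range(n_tasks)]

# A simplified, mobile-friendly design: three tasks, three profiles each
tasks = build_choice_tasks(profiles, n_tasks=3, profiles_per_task=3)
for i, task in enumerate(tasks, 1):
    print(f"Task {i}:")
    for option in task:
        print("  ", option)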

Conjoint exercises can be tedious, with many choice tasks and many options per task. The tediousness is a function of the number of attributes and levels, along with other requirements in the research design. This was a manageable issue when respondents were sitting at their computers. However, in a mobile world, surveys compete with more exciting activities that one can do on a mobile device. Further, consumers have more demands on their time and attention, reducing their interest in completing time-consuming conjoint tasks. This resistance may negatively impact response rates and data quality.

The resistance to tedious or complex tasks on mobile devices is a significant challenge for researchers. One remedy is to reduce the number of choice tasks and the number of options per task. Today, a common design includes 10 to 16 choice tasks with five product profiles to choose from per task – at the upper end, a 16-by-five research design. To accommodate mobile users, we have been able to reduce this to a three-by-three research design in simple studies: three choice tasks with three product profiles to choose from (Figure 1).

The three-by-three research design is consistent with a trend toward making mobile surveys modular, spreading questions across respondents and allowing for sample-level analysis by pooling data across respondents. “Simple studies” means a limited number of attributes and levels, no alternative-specific research designs and no prohibited attribute-level combinations. Because the information collected from every respondent is limited, we recommend taking additional precautions, such as:

Increase the sample size. Early research on the sample sizes needed to obtain valid results when few data points are collected per respondent – as is the case with these simplified designs for mobile users – suggests increasing the sample size by a factor of up to 10 to deliver stable results (see the sketch after this list).

Use prior information. Traditionally, one looks only at the orthogonality and efficiency of a research design. However, when fewer data are collected per respondent, one should use all the information one can get and also base the research design on prior information, e.g., market shares of existing brands and products and preference ladders known beforehand.
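As a back-of-the-envelope illustration (a sketch, not a formula from the article, with a hypothetical base sample of 300), the Python below compares the choice observations collected per respondent under a traditional and a simplified design and scales the sample to keep the total roughly constant. Matching totals alone does not account for the loss of within-respondent information, which is one reason the factor-of-10 guideline above may go further.

def total_choices(n_respondents, n_tasks):
    # Each completed choice task yields one observed choice
    return n_respondents * n_tasks

TRADITIONAL_TASKS = 16   # 16 tasks x 5 profiles per task
MOBILE_TASKS = 3         # 3 tasks x 3 profiles per task
BASE_RESPONDENTS = 300   # hypothetical sample for the traditional design

target = total_choices(BASE_RESPONDENTS, TRADITIONAL_TASKS)
needed = target / MOBILE_TASKS
print(f"Traditional design: {target} choice observations")
print(f"Mobile design needs about {needed:.0f} respondents to match that total")
# Matching totals alone ignores the loss of within-respondent information,
# one reason the article recommends inflating the sample by up to 10x.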

Information overload on small screens

Screen real estate is an important challenge for mobile surveys because of the potential for information overload on small screens. The information presented should be as focused on the task at hand as possible, balancing space among product profiles, questions and answers, and survey-progress information. We suggest using HTML5-based interaction designs that render effectively in the default browsers on common platforms such as iOS and Android. In addition, we recommend applying these formats across desktops and laptops as well as tablets and smartphones.

One way to address information overload is to reduce attribute-level descriptions to a minimum, ideally to a single word or an iconized visual. However, this should be done carefully, as it may result in unacceptable task simplification, limiting the ability to represent actual market situations. One way to reduce the likelihood of problems is to conduct an interaction design exercise: decide what each piece of text or each visual is supposed to convey – nothing more – and what belongs on the main screen versus behind a deeper link. Another tool is the virtual shelf as the vehicle for choice tasks, keeping the exercise and attribute-level information as close as possible to the way they would appear in a store.

Mobile max-diff

Mobile is an excellent platform for running a popular research method like maximum difference scaling (max-diff). Max-diff is a powerful yet easy-to-administer method for rank-ordering items such as claims and benefit statements – or anything that can be compressed into a single item or sentence. Combining max-diff and mobile helps us kill two birds with one stone.
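As a minimal sketch of how a max-diff exercise works (Python, with hypothetical claims; not any particular vendor's implementation), the code below builds best-worst tasks and scores items with simple best-minus-worst counts. Production studies typically estimate utilities with hierarchical Bayes models instead.

import random
from collections import Counter

# Hypothetical claims to rank-order
claims = ["Long-lasting protection", "Dermatologist tested", "Recyclable packaging",
          "New fragrance", "Value pack", "Made locally"]

def build_maxdiff_tasks(items, n_tasks=5, items_per_task=4, seed=7):
    # Draw subsets of items; each task asks which item is best and which is worst
    rng = random.Random(seed)
    return [rng.sample(items, items_per_task) for _ in range(n_tasks)]

def count_scores(items, responses):
    # Simple best-minus-worst counting analysis
    best = Counter(r["best"] for r in responses)
    worst = Counter(r["worst"] for r in responses)
    return {item: best[item] - worst[item] for item in items}

tasks = build_maxdiff_tasks(claims)
# Placeholder responses for illustration: treat the first shown item as best, the last as worst
responses = [{"best": t[0], "worst": t[-1]} for t in tasks]
for claim, score in sorted(count_scores(claims, responses).items(), key=lambda kv: -kv[1]):
    print(claim, score)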

First, max-diff can be very beneficial in emerging markets in Asia and Africa because it is insensitive to how respondents use rating scales, which helps us produce actionable results regardless of cultural response biases. Second, mobile helps by moving away from tedious methods such as computer-assisted personal interviewing (CAPI) in central-location testing (CLT) and door-to-door surveys. Those emerging markets may even skip the stage of traditional online panels and surveys administered on home computers, as we see them in established markets in North America and Europe, and shift directly to mobile.

Hence, combining mobile and max-diff could mark a paradigm shift. Studies of whether mobile max-diff can generate the same results as max-diff administered via CAPI and CLT are promising, and the results can be produced in a quarter of the fieldwork time or less. Even if one cuts the number of max-diff choice tasks from 15 in the traditional study to four or five in the mobile version (Figure 2), the results, conclusions and recommendations are the same – as long as the sample size is increased sufficiently.

One (temporary) caveat is the immaturity of mobile samples. If results differ between traditional online and mobile, the differences can partly be traced back to differences in sample composition. In both established and emerging markets, “traditional” and “mobile” fieldwork attract very different “random” samples despite the same screening criteria. Mobile versions usually attract a younger, more affluent and higher-educated sample. This may change over time as smartphone penetration grows but for now it presents a problem. One could set soft quotas or weight the results of the mobile version to better represent a traditional sample. However, there is reluctance to do so, because one could argue that the two fieldwork methods attract different sections of the population, each misrepresenting it to an unknown extent.
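If one did decide to weight, a simple post-stratification sketch might look like the following; the age-group shares are hypothetical, and real projects would typically weight on several variables at once (e.g., by raking).

# Hypothetical age-group shares: target (traditional sample) vs. achieved (mobile sample)
target = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
mobile = {"18-34": 0.55, "35-54": 0.35, "55+": 0.10}

# Post-stratification weight per cell = target share / achieved share
weights = {group: target[group] / mobile[group] for group in target}
for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
# Younger mobile respondents are weighted down, older ones up; the article notes
# the reluctance to do this, since both samples may misrepresent the population.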

Context-specific research

Mobile conjoint can deliver new benefits that were not possible before mobile users became ubiquitous. For example, study lead times may become shorter because consumers carry their phones everywhere and use them all the time, so they can take surveys more easily. One can also make surveys context- and location-specific by asking questions only of those to whom they are relevant, e.g., those who just passed an aisle, are standing in front of a shelf, just passed an outdoor poster or were exposed to a promotion. Of course, respondents have to give permission for their location data to be used. The goal is to assess the effect of context variations on the choices consumers make. For example, we may use conjoint analysis to study the effect of a sales conversation on the likelihood of brand churn or the impact of aisle end caps on brand perception.
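As a sketch of how such a location trigger could work – assuming an opted-in respondent, a hypothetical geofence around a store and a standard haversine distance check – the Python below decides whether to send a survey invitation.

import math

def distance_m(lat1, lon1, lat2, lon2):
    # Approximate great-circle distance in meters (haversine)
    r = 6371000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical geofence around a store shelf or outdoor poster
STORE = (51.9225, 4.4792)   # example coordinates
RADIUS_M = 150

def should_invite(lat, lon, opted_in):
    # Only invite opted-in respondents who are inside the geofence
    if not opted_in:
        return False
    return distance_m(lat, lon, *STORE) <= RADIUS_M

print(should_invite(51.9230, 4.4790, opted_in=True))   # True: near the store
print(should_invite(52.3700, 4.8950, opted_in=True))   # False: far away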

Another area of great opportunity involves using mobile research to replicate mobile retail environments. If one is using max-diff or conjoint to test elements like claims and benefit statements, they can now be tested in the context of an online retail store. This can deliver fresh insights into online shopping behavior.

For example, as shown in the two parts of Figure 3, a pilot application was recently designed to replicate an Amazon Web store as part of a choice-based conjoint study. Several page elements were turned into attributes and levels. Some were under the control of the personal care brand (e.g., product and package designs), some under the control of the e-tailer (e.g., pricing, callouts, shipping options, position in the list) and some under the control of neither (e.g., consumer ratings). The results can help the brand understand, for example, how to compensate for a negative rating or what position in the listing to negotiate for on the site. Task designs like this allow researchers to connect directly with the relevant target group, inspiring them to behave the same way in the survey as they would on the original e-tailer site.
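To illustrate the kind of question such a study can answer, the sketch below uses hypothetical part-worth utilities (not the pilot's results) in a simple logit share-of-preference simulator to check whether a price cut plus a callout could offset a lower star rating.

import math

# Hypothetical part-worth utilities, as might be estimated from a CBC study
utils = {
    "rating":  {"4.5 stars": 0.8, "3.5 stars": 0.1},
    "price":   {"$9.99": 0.6, "$8.99": 0.9},
    "callout": {"none": 0.0, "Best seller": 0.4},
}

def utility(profile):
    # Total utility is the sum of the part-worths of the chosen levels
    return sum(utils[attr][level] for attr, level in profile.items())

def share_of_preference(profiles):
    # Logit rule: exp(utility) normalized across the competing profiles
    expu = [math.exp(utility(p)) for p in profiles]
    return [e / sum(expu) for e in expu]

ours  = {"rating": "3.5 stars", "price": "$8.99", "callout": "Best seller"}
rival = {"rating": "4.5 stars", "price": "$9.99", "callout": "none"}
print(share_of_preference([ours, rival]))  # does the discount plus callout offset the lower rating?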

Mobile conjoint analysis is already being used for simpler studies, and mobile opens up new opportunities for context- and location-specific research and for replicating mobile shopping behavior. Future research needs to show how we can extend mobile conjoint to address more complex study objectives.
