Editor’s note: Based in Hoboken, N.J., Sean O'Connor is a manager at global consumer insights agency SKIM.

Marketers are realizing that the rapidly changing digital landscape requires new methods to accurately assess how today’s consumers think and behave. To mirror everyday uses of mobile technology, these new methods should be engaging, replacing deliberate questions with fast-paced, intuitive exercises.

In Asian cultures, for example, using mobile research that leverages response latencies can help reduce biases that are prevalent in traditional research techniques, providing a more accurate representation of consumers’ true preferences.

Not better left unsaid

Many Asian cultures are characterized by high-context communication; in other words, in social settings much is left unsaid. Acquiescence and embeddedness are also prevalent in these societies and can distort survey findings through two well-documented biases: acquiescent response style (ARS) and socially desirable responding (SDR). ARS is the tendency to agree with propositions in general, regardless of their content, while SDR is the propensity of respondents to answer questions in a manner they expect will be viewed favorably by others. Both effects are particularly strong when deliberate judgments are sought and questions are administered by an interviewer who is physically present.

Over the past two decades, psychological research has deepened our understanding of human decision-making and raised serious questions about widely accepted research techniques. Daniel Kahneman’s Thinking, Fast and Slow, published in 2011, is probably the most significant popularizer of the “dual processing of information” theory, in which System 1 refers to fast, automatic, intuitive judgments and System 2 to slower, more analytical processing. The distinction is significant, but the systems often work together, and considering either in isolation risks overlooking important elements of the decision-making process.

Uncover conscious and unconscious drivers of actual behavior

Traditional research methodologies tend to activate cognitive processes in the brain, creating a bias toward rationalized, deliberate outcomes that can be incomplete reflections of actual consumer behavior. However, the proliferation of smartphones allows market researchers to use new methods that bridge conscious and subconscious drivers and produce practical, actionable results. One way to do this is through the use of response latency (response time) measurements. There are two general ways that response time is used:

  • It is used to measure associations based on tasks where participants need to quickly and correctly categorize stimuli. Two examples of this are priming tasks and implicit association tests (IATs). Each measures the strength of associations between items. In priming tasks, it is the strength of association between the prime (e.g., brands) and two categories (e.g., positive and negative); in an IAT, it is the strength of association between two pairs of categories, e.g., Coke or Pepsi and positive or negative.
  • It is used as a measure of strength of preference in judgment or choice tasks. In judgment tasks, respondents are asked to make a binary judgment about a stimulus, e.g., like or dislike. In this case, response time is used as a measure of strength of preference for the decision. Similarly, in choice tasks, respondents are asked to choose between two or more items and response time is used to model the gap in preference between the options.
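The second use above can be sketched in a few lines of code. This is a hypothetical illustration, not a published model: it assumes a binary like/dislike judgment task in which faster responses indicate stronger preference, and the function name, time window and linear weighting are all illustrative choices.

```python
def preference_score(liked, rt_ms, min_rt=300, max_rt=5000):
    """Map a like/dislike judgment plus its response time to [-1, 1].

    liked  -- True if the respondent chose 'like'
    rt_ms  -- response time in milliseconds
    Fast answers map near +/-1 (strong preference); slow answers near 0.
    """
    # Clamp to a plausible window to guard against outlier latencies
    rt = min(max(rt_ms, min_rt), max_rt)
    # Linear strength: 1.0 at min_rt, fading to 0.0 at max_rt
    strength = (max_rt - rt) / (max_rt - min_rt)
    return strength if liked else -strength

# A quick 'like' signals a stronger preference than a slow 'like';
# a quick 'dislike' signals a strong negative preference.
scores = [preference_score(True, 600),
          preference_score(True, 4000),
          preference_score(False, 800)]
```

In practice the scoring would be calibrated against the data rather than hard-coded, but the idea is the same: the judgment gives the direction of preference and the latency gives its strength.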

When using these types of methods, and especially when done on a mobile device, it is important to:

  1. Normalize response time at a respondent level, as there are differences in response times between individuals.
  2. Utilize quality-control measures, as these studies are done on a mobile device in real-world settings and not in a controlled lab.
  3. Make sure you are modeling signal and not noise when modeling response time, as not all response times are meaningful.
  4. Consider respondent fatigue. While these tasks can be intuitive and engaging, some methods can require a large number of tasks and can become repetitive and induce respondent fatigue.
  5. Consider the type of stimuli being tested. In general, the tested stimuli should not be overly complex.
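Step 1 above, normalizing response time at the respondent level, can be sketched as follows. This is an illustrative assumption about how one might do it (log-transforming latencies and z-scoring within each respondent); the function and field names are hypothetical, not part of any particular vendor's method.

```python
import math
from collections import defaultdict

def normalize_by_respondent(records):
    """records: list of (respondent_id, response_time_ms) tuples.

    Returns (respondent_id, z_score) pairs, where each z-score is
    computed from the log response times of that respondent alone,
    so that a naturally slow respondent is compared only to themselves.
    Log-transforming first tames the right skew typical of latency data.
    """
    by_resp = defaultdict(list)
    for rid, rt in records:
        by_resp[rid].append(math.log(rt))

    stats = {}
    for rid, logs in by_resp.items():
        mean = sum(logs) / len(logs)
        var = sum((x - mean) ** 2 for x in logs) / len(logs)
        stats[rid] = (mean, math.sqrt(var) or 1.0)  # guard zero spread

    return [(rid, (math.log(rt) - stats[rid][0]) / stats[rid][1])
            for rid, rt in records]

# Respondent B answers three times slower than A across the board,
# yet after normalization their patterns are directly comparable.
data = [("A", 800), ("A", 1200), ("A", 1000),
        ("B", 2400), ("B", 3600), ("B", 3000)]
normalized = normalize_by_respondent(data)
```

Because B's latencies are a constant multiple of A's, the log transform turns that multiple into a constant shift, and within-respondent z-scoring removes it entirely.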

These methods can be used for various research purposes, such as idea screening, (short) claims testing, packaging, line optimizations, etc. However, each method has limitations. When choosing among them, it is important to understand whether a given method is appropriate for, and can actually answer, your specific research question.

In addition to its popularity and familiarity, mobile technology can both eliminate the need for an interviewer and allow the participation of people who may have been underrepresented in traditional online panels.

Case study: Shampoo claims in Asia and Australia

To determine the effects of the new mobile methods on the response biases of interest, a study presenting claims about shampoos was conducted with samples from India, Singapore and the Philippines (representing variation across Asian cultures) and Australia, a more Western culture where response biases were expected to be less pronounced. The shampoo category was used because of its universal appeal and high penetration across markets. Claims were selected to represent product characteristics that could trigger different responses across cultures. All were presented in English, commonly used in each of the countries, to maximize the comparability of the results and minimize any effects of translation.

Our team used a mobile approach based on the second of the two methods described above, response time as a measure of preference strength. Compared with traditional methods such as rating and max-diff, it was found to mitigate acquiescent response style bias in the three Asian countries. In Australia, no significant adjustment toward a more consistent mean could be observed. It is important to note that these findings are preliminary and the complexity of cultural differences in response styles clearly calls for more research.

Claims that could be identified as highly socially desirable within the various cultures performed worse in this study than they did with traditional methods, while less socially desirable claims performed better on average. In other words, individual responses are more likely to deviate from social desirability when the new method is employed. This result suggests that the new approach filters out some response bias, more accurately measuring what consumers are really thinking. The more consistent performance of claims across methods in Australia suggests that social desirability bias plays less of a role there and the swiping method has less to “correct” for.

The last component of the study asked respondents to compare this method to traditional approaches. The baseline score for the traditional methods was relatively high, possibly because the survey was shorter than most. Nevertheless, in three of the countries the new method produced significantly higher engagement levels.

Overall, the results support the hypothesis that traditional methods such as rating and max-diff favor stimuli that are socially desirable because they rely more on rational processes of the brain. When the need for these rational processes in answering questions is reduced, so are certain types of biases. Consumers appear to appreciate this new way of conducting research, which keeps them engaged and provides information that is both more meaningful and more reliable.