Searching for accuracy in self-reported behavior frequency

Stephanie Vance is VP, research and client enablement, at consumer insights company aytm. This is an edited version of an article that originally appeared under the title, “Frequency or fallacy: Let's decipher the intention-behavior gap.”

How many times do you exercise every week? Have a number in your head? Now, how many times did you actually exercise this week? Last week? The week before? Sure, you mean to hit the gym, take a jog or at least stretch, but it doesn’t always pan out. So, do all of those numbers match up? If not, you’ve just exemplified the intention-behavior gap.

As researchers, we often rely on self-reported behavior frequency to understand consumer habits. Yet despite its utility, this kind of data often suffers from accuracy problems. So, let’s look at the intention-behavior gap, how it can impact your consumer insights, and how new research can illuminate steps toward more accurate frequency reporting.

Difficulties in frequency reporting

Self-reported behavior frequency data is quite common in online survey research and has broad utility for insights seekers looking to understand the consumer landscape. For example, if consumers report they brush their teeth three times a day, we can estimate they buy a new tube of toothpaste about once a month. But what happens if they’re actually brushing an average of 1.3 times a day? In that case, the replenishment rate is vastly different from what we anticipated. This kind of over-reporting takes place time and time again.
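To see how quickly that error compounds, here is a minimal sketch of the replenishment math. The tube size and paste-per-brush figures are illustrative assumptions, not data from any study:

```python
# Illustrative replenishment arithmetic. TUBE_GRAMS and GRAMS_PER_BRUSH
# are assumptions for this sketch, not figures from the research.
TUBE_GRAMS = 120.0      # assumed contents of one tube of toothpaste
GRAMS_PER_BRUSH = 1.5   # assumed paste used per brushing

def days_per_tube(brushes_per_day: float) -> float:
    """How many days one tube lasts at a given brushing frequency."""
    return TUBE_GRAMS / (GRAMS_PER_BRUSH * brushes_per_day)

print(f"Claimed (3.0x/day): {days_per_tube(3.0):.0f} days per tube")  # ~27 days
print(f"Actual (1.3x/day):  {days_per_tube(1.3):.0f} days per tube")  # ~62 days
```

At the claimed rate, a tube lasts about a month; at the actual rate, roughly two months – the implied purchase rate is off by more than half.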

Aspirational reporting

Despite the broad utility of quantifying behavior frequency across various consumer activities, there’s strong empirical evidence of significant measurement error in claimed behavior frequency. Often, the most over-reported behaviors are the ones that have salient normative standards, or ones for which people have strong personal aspirations. This over-reporting has been documented across a number of different behavior domains, including:

  • Voting: Citizens self-report more frequent and consistent voting behavior than public voting records show.
  • Church attendance: Church-goers self-report more frequent church attendance than church attendance records suggest.
  • Exercise: University students report more frequent exercise than is supported by sports facility records.

Encountering bias

Acquiescence bias

Common explanations for such over-reporting often point to acquiescence bias, which nudges survey respondents to be agreeable – a “good partner” to the interviewer – by supplying the information they think the interviewer wants to receive.

Social desirability bias

Other research suggests that social desirability is at play, whereby survey respondents are overly guided by socially acceptable or politically correct norms when reporting behavior frequency. Both of these biases are rooted in impression management – the powerful set of behaviors and processes we engage in, as social creatures, to manage how others perceive us.

But these biases fall short of a total explanation. In fact, over-reporting still occurs even under the most private data collection modes. As researchers, that should tell us that the intention-behavior gap cannot be completely rooted in impression management mechanisms. In order to understand the gap in self-reporting, perhaps instead we should look at the way we’re asking.

The dual-question approach

In a typical survey setup, we measure claimed behavior frequency in a singular way, by asking “on average, how often do you [x]?” But could this singular measurement be responsible for our respondents conflating their intentions and their behaviors into a single answer?

What if, instead, we ask two questions? Our hypothesis is that offering a separate opportunity to address intentions will allow respondents to give a more accurate reporting of their actual behavior frequency. So we set up an experiment to test our hypothesis.
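To make the contrast concrete, here is a minimal sketch of the two question flows. The wording is hypothetical – it illustrates the structure, not the exact instrument fielded in the experiment described below:

```python
# Hypothetical question wording - illustrative of the structure only,
# not the exact instrument used in the experiment described below.
behavior = "exercise"

# Control flow: the standard single question.
single_question = [
    f"On average, how often do you {behavior}?",
]

# Experimental flow: intentions first, then actual behavior.
dual_question = [
    f"How often do you intend to {behavior}?",
    f"How often did you actually {behavior} over the past few weeks?",
]
```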

Experiments in frequency reporting

In our survey-based experiment, we wanted to test the hypothesis that giving respondents the opportunity to first express how often they intend to engage in a behavior leads them to report their actual behavior frequency more accurately.

Experimental survey design

We asked the participants to evaluate a random subset of 17 behaviors, spanning a variety of cadences – from things done very frequently, like brushing your teeth, to things done less frequently, like going to see a movie in the theater.

In the control cell, respondents simply reported their behavior frequency across a subset of the 17 domains tested. In the key experimental cell, respondents were first given the opportunity to express how often they intended to engage in the behavior and then reported their actual behavior frequency in a follow-up question.
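If you want to run the same comparison on your own data, a minimal analysis sketch might look like the following. The column names, toy values, and use of Welch’s t-test are all assumptions for illustration, not aytm’s actual analysis pipeline:

```python
# Minimal analysis sketch. Column names, toy values and the t-test are
# illustrative assumptions, not the study's actual data or pipeline.
import pandas as pd
from scipy import stats

# Hypothetical tidy data: one row per respondent per behavior,
# frequency expressed as reported times per week.
df = pd.DataFrame({
    "cell":      ["control"] * 3 + ["experimental"] * 3,
    "behavior":  ["brush_teeth"] * 6,
    "frequency": [21.0, 18.0, 20.0, 14.0, 12.0, 13.0],
})

# Mean reported frequency by cell for each behavior.
print(df.groupby(["behavior", "cell"])["frequency"].mean().unstack())

# Is reported frequency significantly lower in the experimental cell?
ctrl = df.loc[df["cell"] == "control", "frequency"]
expt = df.loc[df["cell"] == "experimental", "frequency"]
print(stats.ttest_ind(ctrl, expt, equal_var=False))
```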

Key findings of the study

Comparing behavior frequency in the control cell vs. the key experimental cell, self-reported behavior frequency was consistently lower in the experimental cell, across every behavior tested. These consistent results demonstrate that by allowing respondents to express their intended behavior frequency prior to reporting their actual behavior frequency, we are able to successfully deconflate intention and behavior. In doing so, we both give respondents the opportunity to express important information (their intentions) and free them up to report their actual behavior more accurately.

Real-world application with Rocket Mortgage

One of the joys of being a researcher is validating a hypothesis through rigorous testing. And at aytm, we get to work alongside researchers coming from all kinds of backgrounds. Recently, we had the pleasure of running a joint study with Isabel Lenzen – a data journalist with Rocket Mortgage.

Saving behaviors of first-time homebuyers

From previous research she’d conducted, Isabel knew that 38.5% of non-homeowners said that the down payment or closing costs were keeping them from purchasing a home. She wanted to analyze the financial difficulties first-time homebuyers encounter during their home-purchasing journey but was aware of the potential limitation of relying solely on self-reported frequency data. To enhance the precision of her findings, she employed the two-question methodology.

Isabel found that on average, individuals purchasing their first home save approximately 22% less than their intended savings goal. This discrepancy results in a monthly shortfall of $130 and an annual deficit of $1,560.
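As a consistency check, we can back out the implied monthly figures from those numbers – this is arithmetic on the reported results, not additional study data:

```python
# Back out the implied savings figures from the reported results.
# This is a consistency check on the article's numbers, nothing more.
GAP_RATE = 0.22            # saving ~22% less than the intended goal
MONTHLY_SHORTFALL = 130.0  # reported monthly shortfall, in dollars

implied_intended = MONTHLY_SHORTFALL / GAP_RATE        # ~$591/month intended
implied_actual = implied_intended - MONTHLY_SHORTFALL  # ~$461/month actually saved
annual_shortfall = MONTHLY_SHORTFALL * 12              # $1,560/year

print(f"Implied intended savings: ${implied_intended:,.0f}/month")
print(f"Implied actual savings:   ${implied_actual:,.0f}/month")
print(f"Annual shortfall:         ${annual_shortfall:,.0f}/year")
```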

Top reasons for the savings gap are:

  • Other saving priorities: 33%
  • Change in/loss of income: 31%
  • Household expenses higher than anticipated: 27%

Disentangling intended and actual behavior frequency

Taken together, this research shows the importance of disentangling intended and actual behavior frequency reporting in survey research. If you rely on behavior frequency reporting to learn about your customer, consider adding a question to your survey to understand consumers’ intentions first. Not only will you ultimately get more accurate behavior reporting, but you’ll learn something new about your customer (their intentions!) that could have important implications for your business.