Editor’s note: Chris St. Hilaire is cofounder of MFour Mobile Research, Irvine, Calif.

A new study conducted in Australia and published in the June issue of Quirk’s says that online surveys taken on smartphones were such a turn-off for respondents from Down Under that a third of them failed to complete all the questions – doubling the drop-off rate among participants who took the same 15-minute surveys on a desktop or laptop computer.

The smartphone respondents’ experience was slow and frustrating, and smartphone screens may simply be too small for a prolonged survey-taking experience, writes Philip Derham, principal of the Australian MR firm that conducted the experiment and recounted it in his article, “Are smartphone users less-engaged survey-takers?”

The article raised a collective eyebrow here at my office.

Derham’s findings accurately document the manifest flaws inherent in online mobile research. But his study did not take into account how far mobile has advanced beyond the online approach he tested. That oversight underscores the need for greater awareness that many researchers are working with only a shadow of what the best all-mobile technology can accomplish.

The Australian study documented a 34 percent drop-off rate among respondents who began a 15-minute online survey on smartphones. In my experience it’s possible to achieve smartphone drop-offs of less than 5 percent for the same survey length. The key is to avoid any vestige of the online process, whose failures were evident in Derham’s study, and to embrace native app technology that eliminates the need to establish and maintain a signal connecting survey-takers’ smartphones with the Web sites where online surveys reside.

Technologies and panel recruitment methods can differ radically between flawed and optimal approaches, yet all are lumped in the same mobile category. There is, in fact, good mobile and bad mobile, and the study published in Quirk’s uncovers precisely what’s wrong with the bad stuff.

Let’s be very clear: I have no gripe with Derham or Quirk’s. To the contrary, I’m pleased that the problems with flawed mobile surveys have been raised in a commendable spirit of inquiry into a subject that’s crucial to the future of market research as society leaves PCs increasingly behind. We all acknowledge that we’re now in a new era in which information and communications are exchanged primarily on mobile devices.

I’d like to continue the inquiry by laying out the differences between online mobile that keeps its feet in the past, as reflected in the Australian study’s results, and native-app mobile technology that looks forward to the future. The available advances radically improve all aspects of the process, from panel recruitment to respondent experience, data acquisition and analysis.

I will start by digging into Derham’s findings and how they would have differed had his study employed state-of-the-art mobile technology and panel recruitment instead of the flawed online approaches now commonly in use.

Derham reported using the following methodology:

He fielded a 15-minute online survey to 14,111 people who had participated in previous surveys by Derham Marketing Research. Of the panelists, 79 percent took the experimental survey on personal computers – laptops or desktops – and 21 percent took it on mobile devices (12 percent on smartphones, 9 percent on tablets).

“Smartphone users were far more likely to start and stop than were desktop or laptop participants,” Derham writes, citing a 34 percent drop-out rate for smartphones, compared with 16 percent for desktops and laptops. Smartphone users were turned off, he adds, even though the surveys were optimized and designed expressly for smartphones.

“Key takeaways … are that smartphone users are less engaged with online survey invitations, take longer to complete online surveys, and, if they start, are more likely to drop out partway through,” Derham concludes.
 
I emphasized “online survey” in the previous paragraphs because an unwillingness to accept that online surveys are over and done is the real problem our industry is facing. State-of-the-art mobile research is a way of escaping the dim fate that awaits the industry if it sticks with online methods.

Let me put it this way: fielding an online survey to a smartphone is like pumping low-grade fuel into a Lamborghini and then expecting it to win the race. In the world of mobile research, anything that has to do with “online” is technological junk, and it will inevitably produce junk results.
 
Derham quotes Simon, a typical participant in his study. Simon would like to take surveys on his smartphone during his train commute but can’t because “we … often lose contact.” His cell phone signal drops out when the train hits a dead spot, making Simon one of many in Derham’s study who attempted to take surveys on their phones and gave up in frustration.

Let’s dive deeper into what appears to have happened.

Because this was an online survey, Simon would have been recruited with an old, online method – an e-mailed invitation. The problem is that most mobile device users now pay more attention to texts than to e-mails.

After reading the e-mail and deciding to participate, Simon would have clicked on a link taking him to the survey Web site. His ability to take the survey without difficulty and frustration would then depend on a bit of luck – avoiding the balky Internet connections or data limits that can beset smartphones, resulting in slow downloads and data exports.

Each question and response in the survey Simon took would have required a separate over-the-Internet exchange between Simon’s smartphone and the online survey site. For a 20-question survey, this exchange would need to happen smoothly 20 separate times.

Simon’s experience was at risk of becoming excruciating at any moment – and, obviously, at some point it did. His survey would have been lost whenever his commuter train went into a tunnel or the tracks wound through steep terrain, interrupting his connection to the Internet. Who could blame him for voicing his disappointment and concluding that smartphone surveys were no good?

Here’s how native app survey technology outdoes the online approach:

There’s no e-mail involved – panelists who’ve downloaded the app receive a text push on their phones, alerting them to the survey opportunity.

The survey’s text and any accompanying pictures or multimedia, including video elements, will load instantly onto panelists’ phones. This is what the term native means. The survey will be stored on each phone in its entirety – cached, in tech terminology – making it native to the device instead of reliant on a connection to the Web site where an online survey would actually be housed.

Why does this matter? It means that panelists are at liberty to respond whenever it’s convenient, even when they’re offline. There’s nothing to impede a smooth, problem-free experience. Therefore, researchers can field surveys with a 15- or 20-minute LOI (length of interview) without fear of drop-offs. In my experience, the drop-off rate with a native survey app is no more than 6.5 percent at 20 minutes.

When respondents are done, they tap submit whenever they’re back online – and they only need that connection for an instant. The completed survey comes back as swiftly as it went out, and researchers can see each complete in real time as it comes in.
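
For readers who want to picture the mechanics, here is a minimal, purely illustrative sketch of the cache-offline-then-submit pattern described above. It is written in Kotlin, and every name in it (Survey, SurveyCache, submitIfOnline and so on) is hypothetical; it is not any vendor’s actual API, MFour’s included.

```kotlin
// Illustrative sketch only: a survey cached on the device, answered offline,
// and submitted in a single brief exchange once a connection is available.
// All types and function names here are hypothetical, not a real product API.

data class Question(val id: Int, val text: String)
data class Survey(val id: String, val questions: List<Question>)
data class Response(val questionId: Int, val answer: String)

class SurveyCache {
    private var cachedSurvey: Survey? = null
    private val responses = mutableListOf<Response>()

    // Store the whole survey locally ("native" to the device) at download time,
    // so answering never depends on a live connection.
    fun cache(survey: Survey) {
        cachedSurvey = survey
    }

    // Record an answer entirely offline; no per-question network round trip.
    fun answer(questionId: Int, answer: String) {
        responses.add(Response(questionId, answer))
    }

    // When the respondent is done and connectivity returns,
    // everything goes back to the researcher in one short exchange.
    fun submitIfOnline(isOnline: Boolean, send: (List<Response>) -> Unit): Boolean {
        val expected = cachedSurvey?.questions?.size ?: return false
        if (!isOnline || responses.size < expected) return false
        send(responses.toList())
        return true
    }
}

fun main() {
    val cache = SurveyCache()
    cache.cache(Survey("demo", listOf(Question(1, "How was your commute?"))))
    cache.answer(1, "Fine, even in the tunnel")          // answered offline
    cache.submitIfOnline(isOnline = true) { sent ->
        println("Submitted ${sent.size} response(s) in one exchange")
    }
}
```

The point of the sketch is simply that answering never touches the network; only the final, momentary submission does, which is why a dropped signal in a tunnel costs the respondent nothing.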

“Online surveys should be easy to do” is the hypothesis with which Derham begins his article. That, respectfully, is where his study errs – failing to take into account new mobile technology that is superseding online research. Online surveys and mobile devices are an incompatible match, akin to a union between a feeble retiree and a vibrant young adult. When surveys are wedded to smartphones via a native app, the match is made in technological heaven.

And make no mistake: smartphones are unavoidably with us. Some might think the mobile world is too much with us, to paraphrase the poet William Wordsworth, but our field will just have to get over that. Today and in the future, any information- or data-based industry or enterprise that doesn’t work on smartphones won’t work, period.

Derham’s article quotes a number of his respondents about their disappointment with online mobile surveys, and one more thing about those respondents caught my eye. Derham notes that the phase of his study that elicited panelists’ opinions about their survey experiences “was drawn from an online panel provider’s database and the sample and responses were from a predominantly female [72 percent] and middle-aged audience” in which 58 percent of respondents were 30 to 59 years old.

This illustrates one of the most nagging drawbacks of online market research – it skews older (and whiter), because the demographic groups it can still attract are those who remain more comfortable with older PC technology. It’s well documented that Millennials, Hispanics and African-Americans favor smartphones, and increasingly are substituting them entirely for laptops and desktops. Reaching these groups is, of course, crucial for any accurate and reliable survey of the U.S. marketplace.

I urge market researchers to think about my own hypothesis: mobile research – as opposed to online – is changing everything, and this change is for the good. The technology that’s now available will take MR to new peaks of accuracy, profitability and utility. The companies and brands that rely on research will get the data they need to make timely, well-founded and effective business decisions. And our industry will reap the rewards that come with greater client satisfaction.

Thanks again to Derham and Quirk’s for highlighting this issue and pointing the way to further useful inquiry about mobile market research.


In the spirit of continued discussion, Quirk’s invited Derham to comment on St. Hilaire’s article. Derham’s response is below.

As a company, we are device- and technology-agnostic. We are not committed to any one method, nor to any one corporate approach, tool or app. We use whatever research method and research tool best suit our clients’ information needs.

In the projects reported in my article for Quirk’s (with one exception), the clients chose not to set up their own customer panels and instead to run one-off surveys. Hence, the survey invitation was a one-time contact, and e-mail invitations were cost-effective and suitable for their budgets.

While it would be useful to use research panels and people with pre-loaded apps on their smartphones, panel providers in Australia are continually unable to provide more than a handful of the customers my clients want to research. As a result, we are often restricted to the clients’ own customer databases. Customers on those databases have chosen to be customers but not to be research panelists, and the client/customer relationships are such that the clients have chosen not to alter that.

Internet speed in Australia is also markedly slower than in Canada and parts of the U.S., and access is more limited, so Australian surveys are designed to cope with our own environment.

The analyses we undertook were based on the continual improvements the software provider has been able to make in the detail of survey-response data. Those improvements have expanded our capacity to analyze that response data and to see the patterns, strengths and weaknesses in what is being done. As a result, we can and do improve, but we are reliant on the facts as they become available.

I appreciate the discussion and the exchange of ideas that enables our industry to strengthen the range of tools and techniques available to us all, so we can choose the most appropriate one, not just the one someone happens to have available.