By the Numbers: How to use research to measure an app's impact



Article ID: 20131005
Published: October 2013, page 28
Author: Jason Jacobson

Article Abstract

Your company’s app is a critical link to your consumers. Here’s a quick look at measuring its effectiveness.

Editor's note: Jason Jacobson is a UX research manager in the San Francisco office of research firm AnswerLab.

Apps represent a critical consumer touchpoint and an extension of a company’s brand identity. In addition to traditional desktop sites, mobile sites, brick-and-mortar presence, etc., apps have evolved into an important consumer interface for companies that offer search and browse capabilities, purchase functionality and transactional abilities. A study1 by Compuware revealed that consumers strongly prefer apps over mobile sites because they are considered more convenient, faster and easier to browse. And comScore reports show that the majority of mobile content is accessed via apps rather than the mobile Web.

While companies often invest a great deal of resources in app development, few quantitatively measure the customer experience after launch. App analytics on downloads, revenue and usage only tell part of the story. How do you evaluate what’s driving those numbers? What quantitative data do you have available to guide design and development decisions? How do you determine whether app changes are having the desired impact? What’s your reference point?

Deliver quantitative data

Continuously tracking an app's user experience via a survey is critical for delivering quantitative data that helps you:

  • provide a comprehensive portrait of who is using the app (demographics, customer relationship, etc.);
  • evaluate whether the app is consistently meeting user experience goals;
  • determine whether app changes improve users’ experiences;
  • compare the app user experience to competitive offerings;
  • establish which tasks app users want to accomplish and success rates;
  • quickly identify areas or paths in the app that present the biggest obstacles;
  • assess the impact of the app on users’ brand impressions; and
  • integrate external data to further understand the potential impact of specific promotions, messages, etc.

Two quantitative methodologies can be used to track the app experience:

Survey surfaced within the app. The optimal scenario is to build survey capability into the app before launch. Built-in survey functionality yields a robust read of the app experience because users can be surveyed in their natural context and the survey can be triggered based on various criteria, such as location (e.g., when the user is near a store) or the completion of certain transactions. An added advantage of this approach is that the survey can double as a customer relationship tool: respondents who give low ratings to certain questions can, with their consent, have their information sent to a customer service rep for immediate resolution.
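Below is a minimal sketch of the trigger-and-routing logic described above. The event names, distance threshold, rating scale and handler functions are illustrative assumptions, not the API of any particular survey platform.

```python
from dataclasses import dataclass


@dataclass
class SurveyResponse:
    user_id: str
    overall_rating: int          # assumed scale: 1 (poor) to 5 (excellent)
    consented_to_follow_up: bool


def should_surface_survey(event: str, distance_to_store_km: float | None) -> bool:
    """Decide whether to surface the in-app survey for this session."""
    if event == "purchase_completed":        # after a key transaction
        return True
    if distance_to_store_km is not None and distance_to_store_km < 1.0:
        return True                          # user is close to a store
    return False


def route_response(response: SurveyResponse) -> None:
    """Flag low ratings for customer service follow-up, with the user's consent."""
    if response.overall_rating <= 2 and response.consented_to_follow_up:
        escalate_to_customer_service(response)


def escalate_to_customer_service(response: SurveyResponse) -> None:
    # Placeholder for the hand-off to a customer service rep (hypothetical).
    print(f"Escalating user {response.user_id} (rating {response.overall_rating})")
```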

Survey sent to customer list/panel. If an app doesn’t have built-in survey capabilities, we can recruit respondents to evaluate the app experience. Using a customer list or a panel, we can screen potential respondents based on app usage and other criteria and send them a link to the survey. This can be executed as a standard survey or longitudinally via a diary study. The downside of this approach is that it lacks the real-world context of app usage and relies on respondents’ recall of the experience.
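As a rough illustration of the screening step, the sketch below filters a customer list on recent app usage and contact permission before sending invitations. The field names and thresholds are hypothetical; real customer lists and panels will have their own screening criteria.

```python
# Hypothetical customer records; field names are assumptions for illustration.
customers = [
    {"email": "a@example.com", "app_sessions_last_30_days": 12, "opted_in": True},
    {"email": "b@example.com", "app_sessions_last_30_days": 0,  "opted_in": True},
    {"email": "c@example.com", "app_sessions_last_30_days": 4,  "opted_in": False},
]

# Screen on app usage and contact permission before sending the survey link.
eligible = [
    c for c in customers
    if c["app_sessions_last_30_days"] >= 1 and c["opted_in"]
]

for customer in eligible:
    print(f"Send survey link to {customer['email']}")
```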

The timing of app experience measurement depends on a variety of factors, including the app’s popularity and number of users, the frequency of use, the app’s functionality and the importance of the app to the customer-brand relationship. Apps that are highly popular, used for frequent transactions and high in impact warrant more frequent, continuous measurement, whereas apps with a smaller user base and occasional use are good candidates for a wave-based research approach.
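The sketch below expresses that cadence heuristic as a simple decision rule. The measurement_cadence function and its thresholds are illustrative assumptions, not benchmarks from the article.

```python
def measurement_cadence(monthly_active_users: int,
                        sessions_per_user_per_month: float,
                        high_business_impact: bool) -> str:
    """Suggest a tracking cadence from app popularity, usage frequency and impact."""
    if (monthly_active_users > 100_000
            and sessions_per_user_per_month > 4
            and high_business_impact):
        return "continuous tracking"
    return "wave-based research (e.g., quarterly)"


print(measurement_cadence(250_000, 8.0, True))   # continuous tracking
print(measurement_cadence(5_000, 1.5, False))    # wave-based research (e.g., quarterly)
```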

Flexible, repeatable, scalable

Tracking the app experience offers a flexible, repeatable and scalable research solution for user testing and feedback that can be implemented over the long term. Survey metrics can be mapped to business objectives, and results can be diagnosed against desired calls-to-action, offering insights that ensure an optimized user experience and a holistic picture of the brand-consumer relationship. Brands that measure this experience are at the forefront of research, investing in a solution to ensure an optimized experience on this important touchpoint.

References

1 “Mobile apps vs. mobile websites – and the winner is? Compuware global consumer survey reveals preference for mobile apps.” http://investor.compuware.com/releasedetail.cfm?releaseid=747433

 
