Conducting comprehensive end-user testing 

Editor’s note: Alfonso de la Nuez is co-founder and chief visionary officer at UserZoom and is the author of The Digital Experience Company: Winning in the Digital Economy with Experience Insights. 

Online shopping and the influential impact of social media have made customers’ experiences crucial to companies’ reputations. How people feel about products and services, and how easy or difficult they find navigating an app or website, are often reflected in online reviews and ratings.

But how telling or accurate are those comments and scores? Sometimes, questions posed in customer reviews don’t go into enough detail or prompt in-depth responses. And ratings – usually in a one- to five-star format – often aren’t well defined. They don’t fully reflect the customer’s online shopping experience or their complete thoughts about a product or service. Companies are left with insufficient or misleading feedback, making it difficult to determine where they may be lacking – knowledge that would help them make improvements and win more customers. 

When the reviews are positive but too general, companies can be lulled into thinking that most of their customers are happy, and that their digital performance and products don’t need adjustments. Being stagnant in today’s highly competitive and ever-evolving marketplace, even when business is riding high, is a recipe for falling behind. One study showed that upwards of 80% of online reviews rate items with four or five stars. And the Wall Street Journal reported that customers are automatically giving out five-star ratings, particularly on Uber and Lyft. Those ratings have become more like participation trophies than actual feedback. Even if a product or service is excellent, it isn’t perfect, and a string of five-star ratings can make consumers skeptical of the ratings’ validity. 

I've long believed that reviews, while somewhat valid if managed well, are vague. If you really want to measure the quality of a product or service experience, you need to conduct end-user testing and collect feedback on all facets of your products, services, digital experience and customer behavior.

Designing the digital experience that you need

When a potential customer encounters your website or app, you usually want them to be able to do a variety of things: search for your products, learn about those products, compare your products to others in the industry, make an informed decision about which products they want, buy the products, share their experience with people they know and ultimately continue using the products because they’re having a great experience with them. 

That experience can include many different activities at different times: interacting with the products, customizing them, exploring their features, performing tasks, sharing them and so on. Making this interactivity happen on the limited real estate of a computer screen, phone or tablet, in an engaging and relatively easy way, is incredibly complicated.

Gathering user behavior data is relatively easy for a website, mobile app or desktop software with telemetry built in. But these metrics don’t tell you whether the digital experience is easy, or whether it is valuable and useful. That’s where user experience research helps fill the need, both quantitatively and qualitatively. And if you don’t acknowledge how important – and difficult – it is to design a great digital user experience (UX), you’ll never make the investment necessary to get it right. 

Many companies go into this effort with no sense of what it’s going to require, and are blindsided by the difficulty, the expense and the extent of the investment in design. Companies need to know how to organize so that they can be effective and efficient in the way they build these digital experiences.  

It’s important to hire a design leader and provide them with a seat at the table, at the same level as the engineering leader, and let them assume responsibility for front-end interaction and visual design. The design leader is the person who oversees making the UX engaging, easy, effective and efficient. 

What do end users really care about?

Being digital means being customer-centric. People want a product that makes their tasks and lives easier. 

The experience gap appears when a company believes it is delivering a great digital and product experience, but the end user doesn’t feel the same way. The end user doesn’t care about 99% of what’s going on when you design and build a website or app.  

Users care about the quality of the experience, not so much the brand or company. Even if you’re a well-known company, they’ll cut you off in a minute if you’re not meeting their needs. They will go with convenience, simplicity and speed. They’re busy, and they don’t want to read a user manual or call customer service. Companies that are investing in a high-quality digital interface and experience are winning the loyalty battle.

Companies should invest in user research and testing and prioritize the use of those insights to drive positive change in their digital and product development processes. It’s important to note that UX research is not the same as market research. Market research has a broader scope. UX research is all about making something people want. Bring UX research into the process as early as possible. 

UX research also focuses on the qualitative aspects of the product experience. You’re going to have conversations with the end user. You’re going to show them a prototype early in the design process and have them interact with it. And you’re going to go through multiple iterations of this process until you get it right, until you have something that’s much more likely to produce a good return on your investment in your digital presence. Effective UX research should combine what people say with what they do, and it should be conducted across both small and large sample sizes to support more strategic decisions.

Understanding the value of user research data  

Traffic analytics systems provide only a partial view of digital business, giving data related to what has occurred, such as pages visited or abandoned shopping carts, but not why it occurred. To collect the why, you need a UX approach, methodology, knowledge and tools.  

Cold data is user activity gathered via standard analytics, including the number of users who visit pages, how long they stay on those pages and on the website, where they drop out, etc. Cold data must be complemented by what I call warm data, which is experiential data that delivers the why and how of user activity. The combination of cold and warm data provides the actionable insights needed to manage digital property. 

Let’s look at an example of a shopping site where people are searching for products. The cold data shows that the product pages are popular. You know how many people were on those pages, how much time they spent there and what they bought. What you don’t know is why they were there, why they did what they did and how they felt about the experience. You need to conduct UX research to understand the why and how. 
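
To make the idea concrete, here is a minimal Python sketch of how cold and warm data for the same shopping sessions might be joined; the session IDs, field names and values are hypothetical illustrations, not output from any particular analytics or research tool:

```python
# Hypothetical cold data: what happened on the product pages, per session.
cold_data = {
    "session-001": {"pageviews": 6, "seconds_on_page": 210, "purchased": True},
    "session-002": {"pageviews": 9, "seconds_on_page": 480, "purchased": False},
}

# Hypothetical warm data: why it happened, gathered from a short in-context survey.
warm_data = {
    "session-001": {"goal": "compare sizes", "blocker": None, "satisfaction": 5},
    "session-002": {"goal": "see more photos", "blocker": "pictures too small to zoom", "satisfaction": 2},
}

# Join the two views by session so each behavioral record carries its explanation.
for session_id, behavior in cold_data.items():
    attitude = warm_data.get(session_id, {})
    print(
        f"{session_id}: {behavior['pageviews']} pageviews, "
        f"{behavior['seconds_on_page']}s on page, purchased={behavior['purchased']} | "
        f"goal: {attitude.get('goal', 'unknown')}, "
        f"blocker: {attitude.get('blocker') or 'none reported'}"
    )
```

In this made-up example, the cold record alone would make session-002 look like a long, engaged visit; only the warm record reveals that the time was spent struggling with product pictures that couldn’t be enlarged.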

User research observes users interacting with your product. It allows you to ask them directly for feedback – why they’re spending so much time on given pages, what they’re looking for and whether they’ve enjoyed the shopping experience. You’ll discover if: 

  • They’re looking at product pictures, product descriptions or both.
  • They feel that the pictures and product descriptions are good enough to help them decide to buy.
  • They care about what other people have to say about your products via reviews.
  • They want to be able to click the product pictures to make them bigger. 

This kind of task-based behavioral and attitudinal information leads to actionable insights that most cold data doesn’t deliver. 

Many companies today understand this, and so they’re gathering a combination of cold and warm data – a multi- or mixed-method approach to research and analytics. However, a lot of companies are not there yet. They may have a sophisticated analytics strategy to understand what is happening, but they don’t have a mature UX strategy that will tell them why and how things are happening. Analytics is passive – a one-way street of data gathering. User research is a dialogue between you and your customers or potential customers.

The focus of UX research and usability testing

The end user must be the center of the process for everyone on the product development team – and this requires research. UX research and usability testing are both focused on the quality of the experience users are having, whether they’re able to easily do specific tasks (effectiveness or success ratio) and how they go about achieving those tasks (efficiency ratio). You ask users to complete a task, observe how they do it and optionally ask questions about why they did it the way they did and if they found it easy, convenient (and even fun when it applies) or confusing and tedious. 

Through this process, you learn the how and the why of user interaction with your product, as well as how users feel about the experience. This is why it’s very important to test early and often – every week or every other week – as you move forward in your design and development process.

Automation can help you to collect, store and control user data, as well as share it across teams for faster and more cost-effective UX development. 

Here are examples of metrics you can derive from this kind of UX testing:

Behavior. It’s critical to understand what people are doing and how they are using your products. Task-based usability testing is an industry-standard method for gathering this information. Typical metrics you can capture include these task-level behavioral measurements, with a computation sketch after the list:

  • Pageviews – May include clicks, taps, number of screens and steps. 
  • Problems and frustrations – Number of unique problems identified and/or percentage of participants who encounter a certain problem.
  • Task success – Percentage of users, given a set of realistic tasks with a clear definition of task success, who succeed at the task. 
  • Task time – Time spent to complete a given task.
  • Abandonment rate – The ratio of the number of abandoned shopping carts to the number of initiated transactions.
  • Average order value – Total revenue divided by the number of checkouts.
  • Conversions – Number of sales divided by the number of visits. 
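
To make these definitions concrete, here is a minimal Python sketch that computes several of them from hypothetical task-session and funnel records; the field names and sample values are illustrative assumptions, not output from any specific testing tool:

```python
# Hypothetical results from a task-based usability test and a checkout funnel.
task_sessions = [
    {"succeeded": True,  "seconds": 95,  "problems": ["filter hidden"]},
    {"succeeded": True,  "seconds": 120, "problems": []},
    {"succeeded": False, "seconds": 240, "problems": ["filter hidden", "no zoom"]},
]
carts_started = 50      # initiated transactions
carts_abandoned = 18    # abandoned shopping carts
visits = 1200
sales = 60
revenue = 4800.0        # total revenue across all checkouts

# Task success: percentage of participants who completed the task.
task_success = 100 * sum(s["succeeded"] for s in task_sessions) / len(task_sessions)

# Task time: average time spent on the task.
task_time = sum(s["seconds"] for s in task_sessions) / len(task_sessions)

# Problems and frustrations: share of participants hitting a specific problem.
hit_filter_problem = 100 * sum("filter hidden" in s["problems"] for s in task_sessions) / len(task_sessions)

# Abandonment rate: abandoned carts divided by initiated transactions.
abandonment_rate = 100 * carts_abandoned / carts_started

# Conversions and average order value, as defined in the list above.
conversion_rate = 100 * sales / visits
average_order_value = revenue / sales

print(f"task success: {task_success:.0f}%, mean task time: {task_time:.0f}s")
print(f"'filter hidden' problem: {hit_filter_problem:.0f}% of participants")
print(f"abandonment: {abandonment_rate:.0f}%, conversion: {conversion_rate:.0f}%, AOV: ${average_order_value:.2f}")
```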

Attitude. These metrics capture how users feel, what they say before, during or after using a product, and how this affects brand perception. To measure this, you might want to collect these attitudinal metrics, with a scoring sketch after the list:

  • Net promoter score – Measures loyalty based on one direct question: How likely is it that you would recommend this company/product/service/experience to a friend or colleague?
  • Standardized UX percentile rank questionnaire – Eight-item questionnaire for measuring the quality of the website UX, providing measures of usability, credibility, loyalty and appearance.
  • System usability scale – Score derived from a short questionnaire that ascribes a quantitative value to qualitative opinions.
  • Task performance indicator – User is presented with a task question, and once they have completed the task, they answer the question and then indicate how confident they are in their answer. 
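
As an illustration, here is a minimal Python sketch of how two of these attitudinal scores are commonly computed; the responses are made up, and the scoring follows the standard net promoter score and system usability scale formulas rather than any particular survey platform:

```python
# Hypothetical answers to the NPS question, on the usual 0-10 scale.
nps_responses = [10, 9, 8, 7, 9, 4, 10, 6, 9, 3]

promoters = sum(r >= 9 for r in nps_responses)   # 9 or 10
detractors = sum(r <= 6 for r in nps_responses)  # 0 through 6
nps = 100 * (promoters - detractors) / len(nps_responses)

# Hypothetical answers to the 10-item system usability scale, each on a 1-5 scale.
sus_responses = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1]

# Standard SUS scoring: odd-numbered items contribute (response - 1), even-numbered
# items contribute (5 - response); the sum is multiplied by 2.5 for a 0-100 score.
contributions = [
    (r - 1) if (i % 2 == 0) else (5 - r)  # i is 0-based, so even i = odd-numbered item
    for i, r in enumerate(sus_responses)
]
sus_score = 2.5 * sum(contributions)

print(f"NPS: {nps:.0f} (promoters={promoters}, detractors={detractors})")
print(f"SUS: {sus_score:.1f} out of 100")
```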

Do you understand what a great digital presence is all about? Are you acting on the development of that digital presence? Do you have the right resources? Are you making the necessary investment to make it happen? Saying that you want to offer great digital experiences is one thing, but to remain competitive and keep growing as an organization, you must act on that intention.