Editor's note: Tim Rotolo is a UX researcher with TryMyUI, a San Mateo, Calif., usability testing firm. He can be reached at 909-240-3093 or at trotolo@trymyui.com.

How does your Web site perform among your target audience? For online sellers, this is critical knowledge for converting site visitors into paying customers. Finding out where target users become confused or frustrated or run into problems allows businesses to tweak – or totally revamp – their Web sites to suit these users’ experiences and expectations.

An equally important, if more elusive, question is: How does your site perform compared to your primary competitors? No site operates in a vacuum, and to achieve the greatest success your testing and research strategies must reflect this.

The methods for answering these questions have grown considerably more sophisticated in recent years. The usability industry has seen the rise of companies offering on-site usability testing, remote user testing, moderated testing, unmoderated testing and every other permutation and possibility. Many large companies have built internal user experience departments to research and craft the optimal designs for their sites.

Across the board, traditional methods of gathering usability information have been strictly qualitative. User testing in all its forms, along with older methods like focus groups, captures users’ opinions: what they like and what they don’t, what they find confusing, and what they wanted but didn’t see on the Web site. It’s a deeply subjective field.

There is certainly much to learn about your site from subjective, qualitative feedback. But zooming in on usability with just one kind of perspective is like looking through only one lens of a pair of binoculars. There is no quantitative complement to put feedback into context and to fill out the picture, nothing to give depth and texture to the one-dimensional information gleaned.

Hard to know

It’s not just that there’s a dimension missing. For the competitive-minded business, it’s hard to compare your performance to competitors without standardized, quantified data. Qualitative feedback is useful for comparing the specific features or interactions that do or do not exist on various sites, but how much, numerically, do those features matter? It’s hard to know what the components of a user experience add up to without a way to measure that experience.

Thus we have seen the emergence of a trend toward hybrid qualitative-and-quantitative models for understanding usability. One metric, the System Usability Scale (SUS), has been increasingly adopted as a complement to qualitative Web site feedback; the “quick and dirty” 10-item questionnaire has been around for decades, and its advantages as a technology-agnostic, open-source, easy-to-implement tool are now being leveraged across the user experience research community.

SUS is popular for its simplicity: it consists of five positively- and five negatively-worded statements concerning consistency and ease of use, to which users respond on a five-point Likert scale ranging from “strongly agree” to “strongly disagree.” When the responses are normalized and summed, the result is a score between 0 and 100 representing the user’s overall satisfaction with the system. Then, with access to a database of other Web sites’ SUS scores, this number can easily be converted into a percentile ranking reflecting the site’s performance relative to the broader Web community.
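
To make the arithmetic concrete, here is a minimal sketch of the standard SUS scoring procedure in Python. The sample responses are hypothetical, and converting the resulting score into a percentile ranking would additionally require a benchmark database of scores.

def sus_score(responses):
    """Compute a SUS score (0-100) from ten Likert responses (1-5).

    Odd-numbered items are positively worded and contribute (r - 1);
    even-numbered items are negatively worded and contribute (5 - r).
    The summed contributions (0-40) are scaled by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses, each from 1 to 5")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# One hypothetical user's answers to the ten SUS statements:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0

Each item contributes zero to four points, so the ten items sum to at most 40, and the 2.5 multiplier stretches that total onto the familiar 0-to-100 scale.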

Other quantification tools, such as the Software Usability Measurement Inventory (SUMI) and the Website Analysis and Measurement Inventory (WAMMI), work in a similar way. These alternatives ask a larger number of questions and target software and Web systems more specifically, in contrast to the technology-agnostic SUS. They trade simplicity for added precision, and unlike SUS they are not open-source.

Breaking things down

A usability metric that differs from SUS, SUMI and WAMMI is the Single Ease Question, or SEQ. Rather than quantifying overall system usability, the SEQ measures usability task by task: in a task-based user test, the user is asked, immediately after completing each task, to rate on a scale of 1 to 7 how easy or difficult it was.

The advantage of the SEQ is that it allows the researcher to quantitatively map the user’s journey, so that spikes in difficulty stand out clearly as the user progresses through a Web site. Since each rating is given relative to the difficulty of the tasks that came before, every user’s map forms an internally consistent chart of comparative usability at different junctures on the site, adding a valuable layer of information and depth to more general usability metrics. Whereas SUS and other scoring systems enable numeric comparison of user-friendliness between sites, the SEQ allows numeric comparison of the severity of various usability problems within the same site.
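
As a simple illustration, the sketch below charts one user’s per-task SEQ ratings in Python and flags any task that falls well below that user’s own average; the task names, the ratings and the two-point threshold are all hypothetical.

# Chart one user's SEQ ratings across tasks and flag difficulty spikes.
# SEQ runs from 1 (very difficult) to 7 (very easy).
from statistics import mean

seq_ratings = {
    "Find a product": 6,
    "Add to cart": 7,
    "Create an account": 3,
    "Check out": 5,
}

average = mean(seq_ratings.values())
for task, rating in seq_ratings.items():
    flag = "  <- difficulty spike" if rating <= average - 2 else ""
    print(f"{task:<18} {'#' * rating}{flag}")

In practice, a researcher would aggregate ratings from many users before treating a single dip as a genuine usability problem.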

The power of quantification

The cold absoluteness of numbers, especially those generated by the user rather than the researcher, makes it that much harder to let your own biases and blind spots skew and misrepresent the truths contained within your user feedback. With quantitative data, the researcher need not shoulder sole responsibility for deciding which problems are more or less important, which need fixing and which are simply anomalies to be written off. Such choices can be tricky, particularly because the same feature that frustrates one user may draw admiration from another. Having numbers with which to weigh the options allows a more informed and objective decision.

Quantitative data can also be used to demonstrate to stakeholders the need to prioritize and act on usability issues, especially when that data bears on performance relative to major competitors. Numbers aren’t subjective like expert opinions or user attitudes; they are harder to discount and easier to act on, and they add urgency and convincing power. Those can be important assets when you need to sell higher-ups on the value of user research and Web site fixes.

Surge ahead of competitors

With the growth of the user experience industry has come an explosion of usability research methods and techniques. The next step is to fine-tune those techniques and combine them smartly to understand Web design in a holistic, multidimensional sense. As hybrid models of user research develop, incorporating both traditional qualitative methods and newer quantitative ones, the opportunity to surge ahead of competitors in appealing to the target audience will lie with the companies that choose to use them.