Organize to maximize

Editor’s note: Mike Elledge is a software accessibility/usability consultant for the University of Michigan, Ann Arbor, Mich. Nancy Levy is a senior project director and moderator at Gongos and Associates, a Bloomfield Hills, Mich., research firm.

The computer is finally sitting idle, the videographer is packing up the gear, your Web site usability interviews are completed, and now it’s time to write the report. The interviews brought out such a wealth of information - where do you begin?

As with any qualitative project, your report begins before you sit down at the computer. A thorough debrief with your clients is critical to writing a report that will be insightful and actionable.

Before beginning Web site usability interviews, provide clients with a set of screen shots showing the Web site screens users will encounter, in their likely chronological order. Encourage clients to use these screen shots as the basis for their note-taking while watching the interviews. It is easy for them to point to or circle the elements in the screen shots that relate to their observations.

The debrief process is then simply a matter of going through these screen shots one at a time and collecting all of the clients’ notes relevant to each page. In the process of working through the screen shots, client opinions about strategic issues, possible fixes and the ease of implementing those fixes will naturally be discussed. These discussions will help you understand the client’s situation, so you can make your report recommendations relevant and actionable. Using screen shots as a framework for the debrief also helps keep the conversation on track and focused. No matter how tired you are at the end of a day of interviews, take the time to do a thorough debrief.

After you’ve met with the client, take some time - you may need less than an hour - to summarize the key findings of each debrief session. Pay particular attention to problems that occurred throughout the tests, and draw some preliminary conclusions about their implications. Your goal is to capture the findings and insights from the research while they are fresh in everyone’s minds.

The report - organizing your findings

You have finished the interviews, conducted daily debrief sessions and have notes, screen shots and questionnaires piled in front of you. You also have a deadline for a topline or draft report bearing down on you. So naturally your first inclination is to jump on your computer and start typing, right? Wrong!

Ignore (for the moment!) all that pressure to get the report written. Now is the time to go back to your research objectives and protocol.

Why do you want to do that? Two reasons. The first is that you need to revisit the context for doing the research. What were the questions you wanted to answer? What were your hypotheses about the findings? Going back to the beginning will help you frame the data you have collected, and ensure that, first and foremost, the client has answers to the questions that were posed.

Second, you need to organize the data you have collected so you can present it in a thorough and coherent way. Those insights and first impressions from the debrief session are valuable, but they need to be grounded in the hard data of the research.

Putting it together

One of the most valuable, and yes, we admit, tedious, parts of the reporting process is putting all that information together. Create a spreadsheet that combines typical usability categories like navigation, labeling and content with task completion and timing information (if timing is part of your test). It is also helpful to include any other data you collect from users, such as task difficulty ratings or written comments.

Take that category information and lay it across your spreadsheet as headings. Then put each subject and task along the left-hand side to create a grid. Go back to your notes and plug in the information you have collected. As you fill it in, remember to categorize problems according to the usability issue. If someone has difficulty completing a task because none of the headings made sense, it is a labeling issue. If they couldn’t find the information they needed because it wasn’t on the page they expected, it is a content issue. If it took them forever to complete a task because they traipsed all over the Web site to find what they needed, it is a navigation problem.
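If you are comfortable with a scripting language, you can keep this grid as a small data table instead of (or alongside) a spreadsheet, which makes the sorting described below nearly effortless. Here is a minimal sketch using Python’s pandas library; the subjects, tasks, notes and timings are invented purely for illustration, and an ordinary spreadsheet program works just as well.

```python
# A minimal sketch of the observation grid, with hypothetical sample data.
# One row per observation: subject, task, usability issue category,
# the note itself, whether the task was completed, and time in seconds.
import pandas as pd

observations = pd.DataFrame(
    [
        ("S1", "Find store hours", "navigation", "Tried three wrong menus first", True, 210),
        ("S1", "Register account", "labeling", "Did not recognize 'Join' as registration", False, None),
        ("S2", "Find store hours", "navigation", "Fell back on search after menus failed", True, 145),
        ("S2", "Register account", "content", "Expected privacy policy on the form page", True, 95),
    ],
    columns=["subject", "task", "issue", "note", "completed", "seconds"],
)

# Export for clients who want to play with the data themselves.
observations.to_csv("usability_grid.csv", index=False)
```

One row per observation also means a problem that stems from two issues can simply get two rows.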

Since many usability problems stem from multiple issues, this can be a little tricky. What is important, however, is not that you choose the correct label for the problem every time, but that you are consistent in how you categorize them. You can always go back and move data around if need be. Don’t get too hung up on perfection!

Once you have completed the grid, you should have a rough history of each test. You can go back to that first subject and relive their experience. Suddenly, you will find, those early notes and insights from the debrief will come alive - and be placed in their proper context.

There are several other benefits to this approach. Because you have the data before you in a spreadsheet, you are free to sort it in a variety of ways. Organize the data according to task. Is there a pattern of usability problems among the users? Sort it according to usability issue. What comes up most often? Do most of the problems in the site relate to navigation? Then maybe the client should revisit the site architecture. Are the problems related to labeling? Then the client needs to better understand the language of their customers.
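Continuing the hypothetical grid sketched above, each of these cuts of the data becomes a one-line operation:

```python
# Which usability issue comes up most often across the whole test?
print(observations["issue"].value_counts())

# Is there a pattern of problems within each task?
print(observations.groupby("task")["issue"].value_counts())

# Rebuild the subject/task-by-category layout by counting notes per issue.
print(observations.pivot_table(index=["subject", "task"], columns="issue",
                               values="note", aggfunc="count", fill_value=0))
```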

This data, qualitative as it is, can also enhance your credibility with the client and, just as important, with their internal clients. You can say, for example, as we did in a recent evaluation of a proposed interface for a library catalogue system, that four of five users preferred a dropdown menu to a list of choices. Or, also in the case of this evaluation, that four of five disliked the labeling for a hotlink to bibliographic information. Since we had asked them why they disliked the label, and for suggestions for label names that would make more sense, we could recommend better alternatives.

You also have a flexible document that you can give to the client, should they want to play with the data.

The report - where to begin

Web site usability studies often provide a surprising amount of information. You may find that putting the information into a spreadsheet by task and issue (navigation, labeling and content) will help you identify recurring problems, and give you insights into the Web site as a whole. It may also help you convince the client of the need for changes, as well as the validity of your observations, if you can report that “seven out of 10” users experienced the same or similar difficulty.

Another common occurrence in Web site usability testing is that larger, strategic issues are often revealed along with the more task-oriented, tactical issues the study was designed to explore.

For example, we recently studied a Web site used by a company’s employees to search an archive of past marketing research reports and tracking studies. Web site designers had structured the information by type of research, expecting that users would search on this basis. Our usability tests revealed that respondents actually wanted to search by product. In fact, we found that the original premise on which the Web site had been built was confusing and very frustrating for users. Though the usability tests had been undertaken simply to help designers tweak some parts of the Web site and provide a “sense check” for some operations, this overall strategic issue was clearly revealed. In our experience, such strategic issues emerge on nearly every study.

When these issues are discovered, they should be introduced at the beginning of the report with tactical issues following. Although these strategic findings may not strictly match the objectives of the study, they are generally too important to be buried at the end.

After strategic issues have been addressed, prioritize the more tactical findings according to their impact on the strategy of the product or service being promoted. This is also (not surprisingly) the order in which the client should address these issues. That said, your findings must still directly answer the client’s objectives for the study, so be sure to include those as well.

It is often tempting to report findings in the order in which they occur on the Web site. For example, in a recent study, respondents 1) registered at a Web site, 2) shopped for a product, then 3) sought help if it was needed. The most important finding, however, occurred at the second step - shopping - where product descriptions misled respondents about their purchases. Although shopping followed registration in the sequence of using the site, we reported the findings on the shopping page first, as they had more impact on the site’s overall usability.

The screen shots created for client note-taking can also be used as the backbone of your report. Incorporating screen shots makes it much easier to explain the positive and negative aspects of a particular Web site area and to suggest appropriate improvements. In addition to complete single-screen pictures, it is also useful to show details of screens when necessary, or multiple screens to illustrate how users move through a process.

As you report the findings, remember to focus on usability, not on programming. Avoid the temptation to prescribe technological or programming solutions; that is the responsibility of the Web site designer. At the same time, don’t shy away from describing the ideal solution from the user’s standpoint - even if it appears technologically challenging. Your role is to describe how the Web site should function. Leave implementation to the Web site designers and programmers.

Include recommendations for improvements along with an explanation of the problem on a single page. Clients find it helpful to see the relevant Web page, a description of its problems (or positive aspects), and recommendations for their solution in one place. It is easier to understand the recommendations if they are shown in the context of the Web site, rather than grouped together at the end of the report.

Sum it up

After creating the body of the report, provide an executive summary for department heads and others not directly involved in the nitty-gritty of the Web site. Begin your executive summary with an overview of the key issues, describe their implications, then give the related recommendations.

Web site usability testing is an effective, insightful research method that can provide much-needed feedback from real site users. By following the above suggestions, you will provide your client with a report that is actionable, understandable and filled with information to make their Web site more valuable to its users.