A healthy site

Editor's note: Debra Power is co-owner of Moore Power Marketing, an Ann Arbor, Mich., research firm. Mack T. Ruffin is associate professor, Department of Family Medicine, University of Michigan. Michael D. Fetters is assistant professor, Department of Family Medicine, University of Michigan. The authors wish to thank the HealthMedia Research Laboratory at the University of Michigan for reproduction of sample Web pages.

Usability testing is one of the most valuable tools for assessing the effectiveness of a Web site. However, many unique challenges arise when testing the usability and content of a health-related Web site. During the course of a recent project, we worked to develop a methodology that provides both qualitative and quantitative data. Using this methodology we were able to measure the effectiveness of the content, navigation, and usability of the Web site. While these methods may not be applicable for all health care Web site testing, we believe the underlying principles could be applied in a wide range of circumstances.

Attitudes and perceptions

In May 2001, Moore Power Marketing completed a series of 10 focus groups for the University of Michigan Medical Center. The groups centered on attitudes and perceptions towards colorectal cancer, with a secondary goal of gathering information for the development of a Web site about colorectal cancer (see "Rewriting the rules," Quirk's, December 2001). The focus groups were Phase 1 of a three-part project. Phase 2, which we will discuss here, consisted of 30 in-depth interviews about the Web site, while Phase 3 will evaluate the effectiveness of the Web site as an intervention tool for users.

In order to maintain consistency, the segmentation of the in-depth interviews closely matched that of the focus groups. Interviews were conducted in urban, semi-urban, and semi-rural areas, with mixed genders and ethnicities. Further, participants were required to be between the ages of 50-70 and unscreened for colorectal cancer. Each of these requirements helped us formulate, and in some ways dictated, the final procedures for the in-depth interviews.

Half of those recruited for this phase of the study had participated in the focus groups. This allowed us to measure how effectively the Web site implemented the findings gathered during the focus groups. The other half of the recruits provided fresh insight, not having learned significant background information about the disease during the focus groups.

During the in-depth interviews, participants reviewed a Web site focused on the available tests for colorectal cancer and evaluated the usability, content, and navigability of the site. The goal of the site was to educate the user and provide a starting point from which they could begin further inquiry with their physician. The interviewer looked for detailed reactions to particular aspects of the Web site, recorded responses, and video/audiotaped the interviews.

Screening for Internet use

To conduct Web site usability testing, participants must be able to use the Internet. While this may seem obvious, measuring a potential participant's competency in this area can be difficult. The audience for this Web site is adults ages 50-70, those most at risk for colorectal cancer. According to a U.S. Census Bureau Current Population Survey (August 2000), older adults have the lowest percentage of Internet use. In that survey, 31 percent of adults 55-64 used the Internet at home, while only 13 percent of those 65 or older did. To properly gauge the effectiveness of the Web site, we felt we needed to recruit both frequent and less frequent Internet users.

To recruit Internet users we included a question in the screener to gauge relative Internet usage competency. It read: "During the interview you will be asked to use a computer to review a Web site. Are you comfortable using a computer, or having the interviewer help you use one?" Several initial test interviews were conducted and it became evident that many of the recruited participants did not have enough experience using the Internet to provide useful feedback on the Web site. The participant needed to have at least a rudimentary understanding of how to use a Web site in order to appropriately answer questions during the interview.

Clearly, a respondent's comfort level with the Internet varies, and the population we were working with had varying degrees of competency. Some participants remarked that the question we posed was about computer use, not Internet use specifically. Still others felt that their ability to "point and click" was enough, even if they had never used the Internet. We revised the question to force participants to qualify themselves based upon their response. It read: "How comfortable are you using the Internet to access information about a variety of topics?" Only those individuals who stated that they were very comfortable (or a similar response) were taken through the rest of the screener. Still, several recruited participants had less expertise in using the Internet than expected. The key to evaluating their experience with the Web site as a potential user was an unconventional mingling of qualitative and quantitative methods.

Integrating measurements

While in-depth interviews tend to be a purely qualitative process, integrating quantitative measurements can also be accomplished relatively easily. For this project, the budget did not allow for eye-tracking, so a series of self-administered tools was developed. Participants completed a paper-and-pencil questionnaire about colorectal cancer. Before the interview began, the interviewer asked a series of questions about the participant's experience using the Internet, whether they searched for health information online, etc. Each participant also filled out an online questionnaire, and the responses were later used to customize aspects of the Web site. Once the interview was complete, another online questionnaire was administered. To determine the efficacy of the Web site's information delivery, several questions were repeated before and after viewing the Web site. Finally, at the end of the interview, a paper-and-pencil document gave participants an opportunity to further voice their opinions on the Web site.

The Web site testing was conducted with two laptops, one being the server and the other the client. By using this configuration we were able to immediately export the data from the online questions into a spreadsheet for analysis. Responses from the other questionnaires were also added later for comparison. By integrating these quantitative components into the project we were more accurately able to determine if the Web site was achieving our goals.
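The pre/post comparison described above lends itself to a simple analysis script. As a minimal sketch, assuming the online questionnaire responses are exported as CSV rows keyed by participant and question (the question IDs and answer key here are hypothetical, not from the actual study):

```python
import csv
from collections import defaultdict

# Hypothetical IDs for the knowledge questions repeated before and after
# the participant viewed the Web site.
REPEATED_QUESTIONS = ["q_risk_age", "q_test_types", "q_polyp_removal"]

def load_responses(path):
    """Read exported questionnaire rows into {participant_id: {question: answer}}."""
    responses = defaultdict(dict)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            responses[row["participant_id"]][row["question"]] = row["answer"]
    return responses

def knowledge_gain(pre, post, answer_key):
    """For each participant, count repeated questions answered correctly
    after viewing the site minus those answered correctly before."""
    gains = {}
    for pid in pre:
        before = sum(pre[pid].get(q) == answer_key[q] for q in REPEATED_QUESTIONS)
        after = sum(post.get(pid, {}).get(q) == answer_key[q] for q in REPEATED_QUESTIONS)
        gains[pid] = after - before
    return gains
```

A positive gain for most participants would suggest the site is delivering its information effectively; a flat or negative gain flags content that needs revision.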

During the interview, qualitative measurements were also made, including tracking how participants reacted to particular aspects of the Web site and the navigation choices they made. Developing an interviewer guide was a crucial aspect of the project, and our format allowed us to address two major areas of the Web site: usability and content. A series of 48 questions was created, each addressing the usability or content of a particular section of the site. Subfactors of usability and content were also assigned to each question to ensure that all aspects of the site were appraised. These included:

Usability

  • navigation
  • timing/timeliness
  • clarity of graphics/text
  • user focus
  • inclusive/comprehensive

Content

  • comprehension
  • utility
  • appealing
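
Tagging each guide question with its area and subfactors makes it easy to verify that no aspect of the site goes unappraised. A minimal sketch (the sample questions below are illustrative, not from the actual guide):

```python
# Each interview-guide question carries its area and the subfactors it covers.
GUIDE = [
    {"id": 1, "area": "usability", "subfactors": {"navigation", "user focus"},
     "text": "What did you expect to happen when you clicked this tab?"},
    {"id": 2, "area": "content", "subfactors": {"comprehension"},
     "text": "In your own words, what does this page say about the tests?"},
]

USABILITY = {"navigation", "timing/timeliness", "clarity of graphics/text",
             "user focus", "inclusive/comprehensive"}
CONTENT = {"comprehension", "utility", "appealing"}

def uncovered(guide):
    """Return the subfactors that no guide question addresses yet."""
    covered = set().union(*(q["subfactors"] for q in guide))
    return (USABILITY | CONTENT) - covered
```

Running a check like this while drafting the guide shows at a glance which subfactors still need questions written for them.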

The interviewing method itself was also a blending of qualitative and quantitative methods. Participants were encouraged to speak aloud as they navigated the Web site, verbalizing why they made particular choices and how they felt about each page. We decided that there were three potential types of interviews: structured, unstructured, or semi-structured. Based upon the relative abilities or comfort level of each user (determined once they reviewed a page or two of the Web site), the interview type was chosen.

Structured interviews were rigid in nature; the interviewer went through the Web site page by page with the participant, asking questions from the guide. This worked best with users who were less comfortable with the Internet.

Unstructured interviews were conducted with participants who were experienced Internet users. They were instructed to use the Web site until they had visited all areas to their personal satisfaction. If a major area of the site was missed, they were asked to review it at that point. Throughout the interview, questions were asked directly from the interview guide, but not every question was necessarily addressed.

The semi-structured interview was used most often and also brought fruitful results. During these sessions, participants went through the Web site at their own pace as the interviewer asked questions from the guide. As similar themes arose, a blending of the semi- and unstructured interviews allowed the interviewer to delve deeper into a particular topic, or break new ground.

Adding, deleting, or modifying the interviewer guide is just as important with usability testing as it is in a focus group setting. Throughout the session participants were asked why they chose to click on a particular word, if they liked color choices, what they expected to have happen after they clicked, and whether the text was easy to understand. A degree of flexibility, with an additional structured mechanism in place, can provide a great deal of insight into the usability of a Web site. The interviewer guide also served as the primary tool for field notes. We inserted large blank spaces in the guide immediately after each question, thus leaving space for participant responses and the interviewer's observations.

Although we videotaped and audiotaped the interviews, observation of participants' action, interactive questioning, and lengthy field notes were more fruitful sources of information.

Usability evaluation

Three dramatically different versions of the Web site were created: the prototype used for the first six preliminary tests; a revised version after 10 interviews; and a final version once the project was complete and the data collected. An examination of the first two versions with regard to navigation, usability, and content illustrates how usability testing can provide insight specific to health care-related Web sites.

During preliminary testing, the navigation of the Web site proved to be universally difficult for participants. When looking for information about health care, participants wanted a concise way to move through the Web site and find what they needed. The first version of the site used a series of tabs to convey each section and sub-section of the Web site. Within each tab there were sub-sections, but users did not see the sub-sections because they focused only on the larger tabs. To correct this in the revision, the sub-sections were color-coded to match the larger tab, making a subtle but identifiable connection.

In this case, traveling through the Web site in a particular order was also important. When discussing colorectal cancer, users needed a certain degree of background information in order to decide which test for the disease they would be most comfortable choosing. Our assumption was that users would travel through the site in a linear fashion through each tab, thus gathering important information before reaching the "Choosing a Test" tab. During usability testing we found that while some participants followed the tabs successively, others skipped around. To combat this, we added numbers to each large tab, encouraging the user to go through each section step by step. (See Figures 1a and 1b.)

Figure 1a


Figure 1b

As with any site, usability of a health care Web site is vital; it means the difference between a user clicking through the site or bypassing it out of frustration. The central focus of our Web site was a discussion of the four types of tests for colorectal cancer. For the first version of the site we wanted to give background information about the processes for choosing each test, and discuss some misconceptions about colorectal cancer testing (Figure 2a). Users told us that our page was confusing, had too much information, and left their next step unclear. For the revision (Figure 2b) we took out all introductory material and made the names of the four tests more prominent. Reducing the required number of click-throughs allowed our participants to find the information they needed faster. Users - especially in this age group - are looking for solutions to their health care concerns. They appreciated factual data and were eager to get to portions of the Web site that focused on how to apply what they were learning to their personal health needs.

Figure 2a


Figure 2b

Content evaluation

Once the navigation and usability of a Web site are in order, the content should be considered. For a health care Web site, adding a glossary and pronunciation guide is a good way to give the user definitions of difficult words. We made our glossary a prominent feature of the site, and potentially problematic terms were added to it. Using abbreviations for long terms may be a space saver, but it proved to be a barrier for many users. Our Web site users also appreciated the use of photos and diagrams to illustrate the text of the Web site. In general, the Web site needed to contain clear language without the use of complex technical terms. Finally, it is important not to underestimate the amount of information needed by the user. We found that once participants became familiar with preliminary information about the disease they were eager to learn more. Adding statistics, links to other health Web sites, and reference materials worked to enhance the user's experience.

Overall, the 30 interviews we conducted yielded a mixed bag of responses. More experienced users were quick to point out both content and usability features the Web site was lacking. They were also more likely to make suggestions for altering the site to meet their needs. Less experienced Internet users needed more prompting and the interviewer needed to ask probing questions about how they used the site and the navigation choices they made. Those who had participated in the earlier focus groups were more likely to comment on aspects of the site that were suggested during the focus groups. They were quick to point out whether or not the Web site was as they had envisioned it. Many of the non-focus group participants had no or very limited exposure to colorectal cancer. Their comments concentrated on content more often than not, and usability issues were tied to accessing further information.

There were also some differences between men and women's responses to interviewer questions. Women requested more visual information from the Web site. Unprompted responses included requests for additional photos and diagrams of cancer cells, screening instruments, and polyps. On the other hand, when men were asked about inclusion of these same visuals they responded either negatively, or that they would like an "opt-in" option to see these visuals. Men also requested statistical information more often than women.

Refine and benchmark

Web site usability testing can be a valuable tool when conducting health care research. The ability to gather input from a potential user allows you to refine a Web site until it meets your specific goals. Consider integrating qualitative and quantitative methods in order to benchmark the revisions you make to the site throughout the project. Also, creating multiple versions of the Web site allows for different approaches during the interviewing phase. Encouraging the participant to provide input at their comfort level during the actual interview will also yield better results. Finally, be sure to analyze both the usability and content of the Web site. Both of these components should work together to formulate a successful site.