Hybrid vehicle

Editor’s note: Nancy Bristow is senior managing director at Frank N. Magid Associates, a New York research firm.  Kenneth Yang is director of business development at RelevantView, a Westport, Conn., research firm.

We all know that a poor experience can quickly drive users away from a Web site. To realize business objectives through their Web sites, companies must balance creative design, functionality and user experience. Within the market research toolkit, online usability solutions have slowly but surely gained acceptance as a viable choice. Properly applied, the quantitative and qualitative data they collect can provide a wealth of specific information to ensure the best possible user experience.

Beyond its value for strict usability research, online usability testing can be successfully applied to research into Web site best practices. Typically, best practices are determined by using focus groups to explore several Web sites in depth, discussing their merits both in isolation and in comparison to one another.

Case study

As detailed below, a typical best-practice research methodology using focus groups was modified and expanded for deployment with an online usability solution. The challenge: A major U.S. insurance company needed to determine the best way to display an agent zip code locator on its site and precisely how it should work.

Best-practice research is typically performed as follows:

Setup

  • Four focus groups are recruited, with 10 to 12 participants per group.
  • Each session is up to two hours in duration.
  • One moderator drives each session.
  • The session is conducted in a typical focus group manner, with questions open to any participant to answer. The moderator ensures that comments and responses come from a cross-section of the group, preventing a strong personality from dominating the conversation and drawing out participants who are less likely to speak up.
  • The moderator controls a single computer and the screen is projected to the front of the room for all participants to view. When navigation through the Web sites is required, the participants direct the moderator as to the path to be taken.

Study

  • Initial questioning occurs.
  • The current client Web site is shown for specific reactions and discussion. The objective is to gauge the initial opinion of the participants about the Web site. An example line of questioning is as follows:

    - If you wanted to find an agent near you, where would you click?

    - What do you expect to see after you click?

    - [Moderator navigates through the Web site based on the previous answer] Is this what you expected?

    - What do you think of how the results are displayed?

    - What would you do next?
  • Competitor Web sites are shown for specific reactions and discussion with the same line of questioning. Comparing the Web site to competitor Web sites is important in order to understand industry best practices as well as the client’s position in the landscape.
  • Non-industry Web sites are shown for specific reactions and discussion with the same line of questioning. Including non-industry Web sites is important to avoid developing ideas about changes to the client’s current site in a vacuum. There are many innovative ideas generated in other industries that can be highly effective for usability if incorporated or modified for a client’s application.
  • Lastly, the client Web site is shown to discuss how it could be improved in light of all of the other sites just seen: Where do you think the agent zip code locator should be placed? How should it be labeled? How should you be able to sort the results? Is it currently clear, or is anything confusing? What is missing?

Hybrid online methodology

In short, online usability testing allows surveying without a focus group setup (an in-person moderator in a facility): participants work through a series of live Web sites, performing tasks and answering questions. The research thus gathers behavior, through analysis of the clickstream data, and opinions, through the accompanying questionnaire.

Setup

  • The survey was 30 minutes in length with an appropriate cash incentive to mitigate the abandonment rate and ensure achievement of the targeted sample numbers.
  • Freed from geographic limitations, the sample was expanded from 40 qualitative participants to 600 quantitative/qualitative participants.
  • We focused on the client’s top three target audiences with 200 participants in each group.
  • Recruitment of the nationwide sample was through an e-mail invitation containing a link to the survey.

Study

  • We showed the current client Web site and instructed the participants to perform tasks and then asked pointed questions after each task. Using RelevantView’s technology we were able to have the respondents perform the same tasks that a moderator would normally perform in a focus group setting. More importantly, instead of asking them where they would click, we could see their behavior through clickstream analysis.

An example task followed by opinion questions would be as follows:

- The instruction, “Please click on the agent zip code locator,” appeared in the left window, with the live Web site displayed in the main browser window. As the respondent interacted with the site, clickstream data was collected, capturing the respondent’s path and the time spent on each page.

- How easy was it to find?

- Is this what you expected to see?

- If not, please describe what you expected to see.

  • We showed competitor Web sites using the same tasks and follow-up questioning.
  • We showed non-insurance industry Web sites, and used the same (or similar) tasks and follow-up questioning.
  • We showed the client Web site again, and asked follow-up questions about the tasks that were performed as compared to the other Web sites.
  • Lastly, we asked questions on desirability, importance and intent to use certain features.
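The clickstream records described above lend themselves to simple quantitative summaries. Below is a minimal sketch of that kind of analysis, assuming a hypothetical record format of (respondent ID, page, seconds on page) tuples; the actual RelevantView data layout is not described here, so the field names and structure are illustrative only.

```python
from collections import defaultdict

def summarize_clickstream(records):
    """Summarize hypothetical clickstream records: total time per page
    and the ordered path of pages each respondent visited."""
    time_per_page = defaultdict(float)  # total seconds spent on each page
    paths = defaultdict(list)           # ordered pages visited, per respondent
    for respondent_id, page, seconds in records:
        time_per_page[page] += seconds
        paths[respondent_id].append(page)
    return dict(time_per_page), dict(paths)

def task_success_rate(paths, target_page):
    """Share of respondents whose path reached the target page."""
    reached = sum(1 for p in paths.values() if target_page in p)
    return reached / len(paths) if paths else 0.0

# Illustrative data: two respondents navigating toward an agent locator page.
records = [
    (1, "home", 12.0), (1, "find-agent", 8.5),
    (2, "home", 20.0), (2, "products", 15.0),
]
times, paths = summarize_clickstream(records)
print(task_success_rate(paths, "find-agent"))  # 0.5
```

A summary like this is what lets the hybrid approach report observed behavior (where respondents actually clicked, and how long it took) rather than relying on self-reported intentions.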

In the focus groups, we were limited to asking participants how they would perform the tasks. With the hybrid online approach, we instead asked the respondent to perform an action (click on the agent zip code locator), then followed the task with opinion questions similar to those posed in the focus group. The advantage of the hybrid online approach is the ability to survey a large number of geographically dispersed respondents, tracking their clickstream data and analyzing that information alongside the corresponding opinions.

When assigning a task to the respondent, clear wording of the instructions is key. Since there is no moderator to intervene, a poorly worded instruction can yield less reliable results.

In traditional focus groups, the moderator can gather a broad range of opinions during the course of a session. Since respondents are less likely to provide detailed answers to open-ended questions online, we use specific, directed questions (e.g., How easy was it to find the agent zip code locator? Very easy, easy, neither easy nor hard, hard, very hard). We then follow up with an open-ended question after a series of directed questions. Although you may lose some of the organic thought of discussion groups, the follow-up question provides more than enough insight into the range of opinions.
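Directed questions like these produce closed-ended responses that are straightforward to tabulate at scale. A minimal sketch, assuming a five-point ease scale of the kind described above (the tallying code is illustrative, not the platform's actual reporting):

```python
from collections import Counter

# Hypothetical five-point ease scale, from easiest to hardest.
SCALE = ["very easy", "easy", "neither easy nor hard", "hard", "very hard"]

def tabulate(responses):
    """Return the percentage of respondents choosing each scale point."""
    counts = Counter(responses)
    total = len(responses)
    return {point: 100.0 * counts[point] / total for point in SCALE}

# Illustrative responses from five respondents.
responses = ["easy", "very easy", "easy", "hard", "neither easy nor hard"]
for point, pct in tabulate(responses).items():
    print(f"{point}: {pct:.0f}%")
```

With 600 respondents rather than 40, distributions like this become stable enough to compare across the client, competitor and non-industry sites.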

Benefits

The hybrid online method yielded multiple benefits. First, we gathered more comprehensive information than standard focus groups provide. Although qualitative information is necessary to understand the emotional mindset, viewing those results next to quantitative data created a holistic view of the user experience. Conducting the study online also allowed us to recruit a larger sample (increased from 40 to 600 respondents) that was geographically diverse.

The time savings were invaluable: we cut the time from project start to final report by 50 percent. Without travel to multiple locations, we were able to focus on development and analysis while still shortening the overall research timeline. As a result, the client got actionable, robust results and could react quickly to user preferences, improving functionality that invariably affects the goodwill of the brand.

The online test also streamlined the best-practice research, requiring less time from the respondents while achieving the same objective. Instead of spending time discussing in the group where to find the agent zip code locator, we could have them perform the task and track their behavior in a fraction of the time.

Lastly, we performed the entire project at a 40 percent cost savings to the client, eliminating the expenses for travel to multiple locations, rental of facilities, and other costs tied to each geographical location. Beyond the obvious savings, the client had additional money to act on the research results and to fund work on additional usability issues.

Main limitation

The main limitation was the lack of personal observations. Although we could ask the respondents for their opinion, physical cues such as a raised eyebrow, quizzical expression or change in tone could not be captured. In addition, there was no opportunity to immediately explore such reactions or interesting comments. In the traditional method, an experienced moderator would be able to gain a deeper understanding.

Ideally, the client would be able to go through a progression of studies to garner the best aspects of each methodology. The current pressure to respond rapidly to changing customer preferences, however, demands cost-effective measures. On balance, the advantages of the hybrid online method outweighed those of traditional focus groups for our research on this client’s Web site: we gathered more comprehensive information, obtained a greater sample size and achieved results in a shorter time period.