If given the choice

Editor's note: John Allison is director of market research for Fidelity Employer Services Company. Chris O'Konis is manager of Web analysis for Fidelity eBusiness.

When the advantages of online research are discussed, emphasis is usually placed on cost savings and improved turnaround time compared to more traditional research methods. The fact that people with Internet access prefer to take Web surveys is often swept aside as simply an additional consideration.

Marketing researchers, however, should take a hard look at their relationships with survey respondents. As industry-wide refusal rates continue to rise - particularly for studies relying on traditional telephone contacts - researchers need to think about creative ways to improve the survey experience for their important customers and potential customers. Too often, inflexible single-mode research designs are employed. By incorporating the option of taking surveys online into research programs, researchers can start to rebuild trust and goodwill with respondents.

In 2001, Boston-based Fidelity Investments, in conjunction with Burke Customer Satisfaction Associates, Cincinnati, experimented with giving users of its NetBenefits Web site a choice of how to respond to a survey about its features and functionality. After being contacted on the telephone, site users were given the option of instead taking the survey over the Web. The experiment proved to be such a success that Fidelity repeated the design for a follow-up study early this year and has also employed multi-mode research successfully for other studies, including studies of business populations.

Many have expressed concerns about such multi-mode survey approaches - particularly with regard to different response patterns that could potentially result from an interviewer-administered method, such as phone surveying, and a self-administered method, such as using Web questionnaires. Indeed, some methodological differences did emerge when Fidelity compared results from phone and Web surveys.

Such methodological differences do not, however, invalidate the approach. By understanding these differences, it is possible to put results into the proper context and make appropriate wave-to-wave comparisons as the percentages of survey takers using the phone and Web shift.

Choosing a method

The NetBenefits Web site, an offering of the Fidelity Investments Institutional Retirement Group, lets three million participants access and manage their employee benefit account information.

When compared to competitive Web sites, Fidelity's NetBenefits offering had received high rankings from key evaluators. Until 2001, however, no proprietary research had been conducted that surveyed users from all of the Institutional Retirement Group's business units - including those serving client companies with a variety of defined contribution plans, defined benefit plans, health and welfare policies, and payroll services. Fidelity therefore commissioned a study to create benchmark measures of customer satisfaction and better understand usage patterns of recent NetBenefits site visitors.

When choosing a methodology for the study, Fidelity faced some important constraints. Because Fidelity had not focused on capturing e-mail addresses when compiling retirement customer data, it could not pursue an online-only survey format, as it would not have reached a representative sample by sending out e-mail invitations only. Agreements with some client companies also restricted surveying that could take place with their employees, making a random Web site intercept problematic. Furthermore, findings from Web site intercept approaches would probably have overemphasized views of frequent users, who might have been more likely to see the survey invitation and respond.

A matter of trust

Fidelity was, however, able to obtain phone numbers of recent users of the NetBenefits site who were eligible for surveying. A traditional telephone study certainly would have been possible. The issue of trust loomed, however. Being called at home and asked to participate in a survey about a financial relationship - even a recognized relationship with Fidelity - is daunting to many. The authenticity of survey sponsorship is much easier for a contact to ascertain on a Web questionnaire than over the phone. In the telephone case, customers are at the mercy of quick introductions (to which they are likely not paying full attention) from unknown and unsolicited callers, often at inopportune times. In the online instance, the customer sees a screen that clearly identifies the survey sponsor and vendor in writing - in addition to an invitation e-mail explaining the study's purpose and how the contact was targeted for participation.

A Web-savvy population

Researchers at Fidelity thought that giving potential respondents an online survey option would give them more time to consider whether to participate and, it was hoped, reduce non-response bias that might have occurred with a telephone-only approach. It should be emphasized that the survey population - people who had accessed the NetBenefits site in the previous 90 days - was a group that had demonstrated a preference for managing information online. There was some concern that those willing to provide information over the telephone would not be representative of the total population. Offering a choice to survey contacts seemed to be the best way to expand the response base by capturing those with both phone and online response preferences.

In the field

Working with Burke Customer Satisfaction Associates to implement the survey, Fidelity fielded the benchmark study of more than 1,700 customers in April and May 2001. Potential respondents were told that the survey would take about 15 minutes, then - in the first 30 seconds of the interview - they were given the choice of completing the questionnaire by phone or on the Web. (People also were given the option of scheduling a phone survey at some other time.)

About 88 percent indicated they would take the study online. There are likely many reasons people voiced a preference for the online option. Some of these were no doubt polite refusals - customers who had no intention of completing the study at all.

Others, however, surely recognized that responding to the survey online would put them in control. The fact is that an evening phone call for survey participation is disruptive. Precious "home time" is often fully allocated to personal and family needs. Should a crying baby need attention during a telephone interview, respondents are not able to put the phone call aside and resume it later. With online participation, respondents can begin at times that are convenient for them and handle such interruptions.

Following through

We were, in fact, surprised by how many of those who indicated they would take the survey online actually followed through. These people were first asked to provide an e-mail address. Only 6 percent refused to do so. (Those who refused to provide an e-mail address were still given the option of participating in the study: They were given a survey Web site address, as well as a personal password. This effort elicited little response, however, as only seven completed surveys were obtained over the Web from respondents who did not provide e-mail addresses.)

People who chose the Web option and did provide an e-mail address were also given the option of writing down the survey URL and password during the phone call. They were also sent an e-mail invitation containing the same Web address and password. Slightly more than 10 percent of e-mail addresses provided turned out to be bad or recorded incorrectly. On the other hand, 54 percent of respondents who provided a good e-mail address did go to the site and complete the survey.
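To see how these stage-by-stage rates compound, it can help to chain them into a simple yield calculation. The sketch below uses the rates reported in this article but an invented recruit pool of 1,000; it is an illustration of the arithmetic only, not a fielding plan.

```python
# Chain the stage-by-stage rates reported above to estimate Web completes.
# The recruit pool of 1,000 is hypothetical; the rates come from this study
# and would differ for other populations.

recruits = 1000          # contacts reached by phone (hypothetical pool)
chose_web = 0.88         # indicated they would take the survey online
gave_email = 0.94        # of those, provided an e-mail address
good_email = 0.90        # of addresses provided, deliverable as recorded
completed = 0.54         # of good addresses, went to the site and finished

expected = recruits * chose_web * gave_email * good_email * completed
print(f"Expected Web completes from {recruits:,} recruits: {expected:.0f}")
# About 400 completes - an overall yield of roughly 40 percent of recruits.
```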

Although Fidelity was compelled to make initial calls because of a lack of e-mail addresses, it is unlikely that such a completion rate could have been achieved with e-mail invitations alone. We are fairly certain that the initial call introducing the study and its purpose and offering a sponsor contact number helped substantially. Absent the set-up call, online response would likely have been much lower.

A time lag

The fact that there was a time lag between the initial promise to complete the survey online and ultimate disposition of the sample record did introduce an annoying uncertainty into the process. While the fielding was taking place, it was unknown how many promises would ultimately translate into completes. This situation made it unclear if enough respondents were being recruited to meet the desired quotas. Although we turned out to have a higher-than-expected promise-to-completion ratio, this might not have been the case.

Because those who choose to participate do so on their own schedule, the results can trickle in. Less than 40 percent of people completing the survey on the Web did so by the end of the day after they were called. We attempted to impart a sense of urgency and sent a reminder e-mail to those who had not responded within five days. As it turned out, only 8 percent of Web responses were received more than seven days following the initial calls. (For the second wave of the study, a program was set up so that e-mail invitations were sent out immediately upon conclusion of the recruiting phone calls, leading to a somewhat faster response.)

Disarming skepticism

When respondents were given a choice of methods, the total number of completions coming via the Web turned out to be more than three times as many as the completions coming via the phone. Because a secondary objective of the benchmark study had been to determine how phone and Web responses compared to each other, Fidelity actually had to do phone interviews without giving people a choice of methods simply to meet the initial phone quota expectation. (In the second wave of the study conducted this year, all potential respondents have been given a choice of survey methods.)

While data about survey completions tell part of the story, it is harder to convey the impressions created by monitoring the actual survey calls. Several calls seemed doomed to result in refusals, terminations - or worse, complaints to Fidelity. After interviewers introduced themselves, the customers on the phone were curt, distracted, and non-responsive. It was stunning to hear how their skepticism was disarmed once they were told they would have an option of channels for response. Informing respondents of their choice as early as possible in the interview was clearly a key to the success of the project.

Data differences

Although potential differences in response patterns for both scaled and open-ended questions had been an initial concern, Burke's experience led us to believe that telephone and Web responses would be largely comparable, and that any differences that did emerge would more likely be attributable to differences between the populations choosing the two methods than to pure method effects.

The surveys included three open-ended questions. The online response to these questions was excellent - quite comparable to that achieved via the telephone with interviewer prompts and follow-ups. For one general improvement question that everyone was asked to answer at the end of surveys during the study's second wave, Web respondents typed in, on average, 168 characters and 29.4 words, while interviewers transcribed, on average, 160 characters and 29.7 words per telephone respondent.

Of course, online comments were captured as offered without an interviewer filter and the potential bias that represents. Although there is less opportunity for probing online, there is also no chance for responses to be affected by such factors as interviewers' training or style differences, typing speed, fatigue, speech patterns, or accents.

On many key demographic measures, the phone and Web respondent findings were also quite similar. The mean age was identical, and differences in mean household income, gender, and the percentage with another Fidelity account were insignificant. Web respondents did not turn out to be longer users of the NetBenefits site, nor did they tend to have accessed it more recently.

On some "technographic" issues, however, Web respondents and phone respondents did differ significantly. Web respondents were significantly more likely to access the NetBenefits site from work as well as from home, and they were more experienced with the Internet, more involved with financial portals, and more experienced with online financial transactions, such as making bill payments and brokerage trades or mutual fund transactions.
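Readers wanting to make similar mode comparisons can judge whether a phone/Web gap on a yes/no measure is significant with a standard two-proportion z-test, a common choice for this kind of check. The counts below are invented for illustration and are not the study's actual figures.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two sample proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)              # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))               # z statistic, two-sided p-value

# Invented counts: say 910 of 1,300 Web respondents vs. 249 of 430 phone
# respondents report accessing the site from work.
z, p = two_proportion_ztest(910, 1300, 249, 430)
print(f"z = {z:.2f}, p = {p:.4f}")              # p < .05 -> significant gap
```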

Without providing a Web survey option, it is likely that the population of more sophisticated online financial consumers would have been under-represented in the sample, possibly impacting actions taken as a result of the study. For example, in this year's second wave of the study, Web respondents were significantly less likely than phone respondents to have called a Fidelity representative in the three months prior to the survey. Without getting responses via the Web, Fidelity might have overestimated use of this information channel.

Scale items

Several attribute ratings included in the study employed a five-point anchored scale used for surveys at Fidelity Investments, where the top box represents "strongly agree," the second box "somewhat agree," the midpoint "neither agree nor disagree," the next box "somewhat disagree," and the bottom box "strongly disagree." Top-two-box scores on several key attributes did not differ significantly between the two methods.

Web and phone response patterns were not, however, by any means the same. Phone respondents were more likely than Web respondents to use the top "strongly agree" response (as well as the bottom box "strongly disagree" response). Web respondents, on the other hand, were more likely to migrate away from the scale endpoints, tending toward more use of the "somewhat agree" and "neither agree nor disagree" options.

When these responses are converted to numerical values from 1 to 5, the response pattern exhibited by Web respondents more closely matches a normal distribution than that of phone respondents (Figure 1). A normal distribution has both a skew (a measure of symmetry) and a kurtosis (a measure of the "thickness" of its tails) of 0. For the phone respondents, the response pattern had a skew of -1.53 and a kurtosis of 2.63. For the Web respondents, the response pattern had a skew of -0.91 and a kurtosis of 0.95.

[Figure 1: Distribution of five-point scale responses for phone and Web respondents]
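These moments are straightforward to compute for any response distribution. The sketch below uses invented response counts shaped like the patterns described above (endpoint-heavy for phone, center-leaning for Web), not the study's actual data; note that scipy reports "excess" kurtosis, which is 0 for a normal distribution, matching the convention used here.

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Invented response counts on the 1-5 scale ("strongly disagree" = 1,
# "strongly agree" = 5), shaped like the patterns described above.
phone_counts = {1: 25, 2: 20, 3: 50, 4: 105, 5: 300}  # piled on the endpoints
web_counts   = {1: 10, 2: 35, 3: 55, 4: 250, 5: 150}  # drawn toward the middle

def moments(counts):
    values = np.repeat(list(counts.keys()), list(counts.values()))
    # scipy's kurtosis() returns excess kurtosis: 0 for a normal distribution
    return skew(values), kurtosis(values)

for label, counts in (("phone", phone_counts), ("web", web_counts)):
    s, k = moments(counts)
    print(f"{label}: skew = {s:.2f}, excess kurtosis = {k:.2f}")
# The phone pattern shows the more negative skew and the heavier peak,
# mirroring the direction of the study's figures.
```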

These findings confirmed previous research that Burke had performed (see sidebar), suggesting that contacts respond differently to scales they can see versus ones that are read to them - especially in use of end points. In particular, there are primacy and recency effects in phone surveys when completely anchored scales are used. That is, phone respondents have a greater tendency than Web respondents to remember and repeat the first and last scale point read by the interviewer. Web respondents, on the other hand, have a visual scale to look at for each attribute, perhaps drawing them toward the center of the scale.

The more predominant "top box" use among phone respondents suggests that studies employing a telephone-only design might be overstating respondents' degree of satisfaction. Listening to interviews, it became clear that in many cases we were forcing contacts to use a scaled response when in fact they were making dichotomous yes/no judgments - either they were satisfied or they were not. In such cases, a "strongly agree" response seems to become a proxy for a generally positive view and not necessarily an accurate representation of respondents' true feelings.

By using the common reporting standard of combining the "top two" positive and "bottom two" negative response options, we minimized the phone and Web differential and presented numbers that were truer to the essentially dichotomous contact response. Other options are also being explored for future mixed-mode surveying. Because Web and phone responses have been shown to more closely match each other when numerical scales with anchored end points are used instead of completely anchored scales, making a change to this sort of scale has been proposed within Fidelity.
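The collapsing itself is trivial to implement. A minimal sketch, reusing the same invented counts as the earlier example, shows how very different endpoint use can still yield nearly identical collapsed scores:

```python
# Collapse 5-point responses into top-two / bottom-two box scores,
# the reporting convention described above. Counts are invented.
def box_scores(counts):
    total = sum(counts.values())
    top_two = (counts[4] + counts[5]) / total      # somewhat + strongly agree
    bottom_two = (counts[1] + counts[2]) / total   # somewhat + strongly disagree
    return top_two, bottom_two

phone = {1: 25, 2: 20, 3: 50, 4: 105, 5: 300}
web   = {1: 10, 2: 35, 3: 55, 4: 250, 5: 150}

for label, counts in (("phone", phone), ("web", web)):
    t2, b2 = box_scores(counts)
    print(f"{label}: top-two = {t2:.0%}, bottom-two = {b2:.0%}")
# phone: top-two = 81%, bottom-two = 9%
# web:   top-two = 80%, bottom-two = 9%
```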

Caveats

While online-only survey formats will likely cost less than traditional telephone research, it should be noted that this isn't necessarily the case with mixed phone/Web approaches. In this case, there was a cost to set up both online and CATI survey programs, as well as expenses incurred in merging and testing independent data sets.

Also, all of the studies we've done with phone/online response options have been with contacts possessing substantial Web experience and a known comfort level in working with online material. The effectiveness of adding an online response option for less Web-savvy populations remains unproven.

All trends, however, point to increasing use of multi-mode survey approaches. As CATI and Web survey systems become better integrated, any incremental costs produced by a multi-mode design should be reduced. Growing familiarity with managing information online should make adding a Web option viable for more and more target populations. And, of course, with traditional telephone research facing diminishing response rates, leading to rising costs and less credible results, trying new approaches simply becomes increasingly necessary. Adding an online response option has proved to be advantageous for our marketing research efforts and will likely be beneficial for many other research professionals.
