Broad appeal

Editor’s note: Adam Froman is president of Delvinia Interactive, a Toronto research and marketing firm.

You are project manager for a telecommunications company’s customer satisfaction survey. To put the survey in the field, would you send an army of door-to-door interviewers across the country? Would you perhaps arrange for home-based interviewers to call customers? Or would you simply hire a centralized, computerized phone room to do the job?

For today’s project manager, the answer is clear. In fact, it’s hard to imagine what it must have been like to conduct quantitative research before AT&T pioneered computer-assisted telephone interviewing (CATI) with a customer satisfaction survey in 1971.

I think it is fair to say that without CATI, the field of marketing research would not have the influence it does today in corporate boardrooms, political backrooms and newsrooms.

And I bet tomorrow’s market research professionals looking back to the CATI era will find it hard to imagine how the research business could have run before the Internet.

Recruiting Internet panels for text-based scripts is now fairly common and represents simply a shift of the questionnaire from the CATI operator’s computer screen to the survey participant’s computer screen - much as CATI itself shifted pencil-and-paper surveys into an automated, centralized system.

Something new

But the most exciting new Internet survey technology is just a little further out on the horizon. Broadband Internet platforms will offer something completely new: they marry the consistency of well-crafted telephone (and now, text-based Internet) scripts with many of the visual and interpersonal advantages of focus groups.

Almost anything you want the customer to see or hear - TV pilots, virtual 3-D images of products under development, radio jingles - can be tested on the broadband Internet. According to Stephen Popiel, senior vice president of Millward Brown Canada, “Broadband data collection technology makes polling the nation easier, faster and more convenient for participants and researchers alike.”

Popiel came to this conclusion after overseeing a controlled broadband research experiment funded by an applied research grant from the Department of Canadian Heritage and CANARIE Inc., a non-profit government and private partnership advancing Internet technology in Canada.

The experiment was straightforward. Nissan and Expedia.ca each supplied a television commercial which was tested using Millward Brown’s traditional LINK ad testing methodology and standard marketing research industry recruiting and data analysis techniques. Those results were compared with results from an online version of the same ad test, which used Delvinia’s AskingMedia broadband platform to turn LINK into an online tool.

As always in marketing research, the success of the offline and online ad tests depended on recruiting. One hundred people were recruited for each test. For the offline LINK test, standard telephone recruiting was used. Respondents were paid $50 to come to a central Toronto focus group facility to view and react to a TV commercial. The optimum time frame to recruit 100 respondents for an offline LINK test at a central location is two weeks.

Obviously, for the broadband test we screened for high-speed Internet access. With cable and DSL penetration now nudging 50 percent in Canada, this was not an obstacle and will be even less of one as more people get online and existing users update their Internet technology (70 percent of Canadian households and 95 percent of businesses are online).

Among the project team, however, there was some concern that one qualifier could indeed prove an obstacle for the online test. While Expedia required a general sample of Canadians, for Nissan we had to screen for Canadians who intended to make an automotive purchase within a particular time frame and budget - indicators with an incidence of less than 1 percent. A total of 20,000 recruiting e-mails were sent out nationally. How long do you think it took to find our sample? Two weeks? Two months? Hardly: in one day we had our 100 online completes.
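(As a quick sanity check on those numbers, the arithmetic below shows why a sub-1 percent incidence and 20,000 e-mails are consistent with 100 completes in short order. The response rate used is purely an illustrative assumption, not a measured figure.)

```python
# Back-of-the-envelope yield check for the Nissan online recruit.
# The e-mail count and completes are from the test; the response
# rate below is an illustrative assumption, not a measured figure.
emails_sent = 20_000
completes = 100

overall_yield = completes / emails_sent
print(f"Overall yield: {overall_yield:.2%}")  # 0.50%

# Consistent with a qualifying incidence under 1 percent: even if
# only half of qualified recipients responded, a 1% incidence
# across 20,000 e-mails would still supply the 100 completes.
incidence = 0.01             # assumed upper bound from the screeners
assumed_response_rate = 0.5  # illustrative assumption only
expected_completes = emails_sent * incidence * assumed_response_rate
print(f"Expected completes at 1% incidence: {expected_completes:.0f}")  # 100
```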

Here’s what those respondents saw. After they answered screening questions to establish that they fit our profile, a video hostess introduced the survey. (In our test, the hostess was not live and interactive, but the technology is in place to handle that requirement - cost would be the only barrier.) Respondents were then streamed the commercial and, by moving their cursor, could react to it in real time. They then answered questions on the product attributes and the ad itself, following a rational skip pattern.
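The questionnaire logic itself isn’t published here, so the following is only a minimal sketch of what a rational skip pattern looks like in code: each answer determines the next relevant question, so respondents never see items that don’t apply to them. The question IDs, wording and routing rules are hypothetical, not those of the actual LINK questionnaire.

```python
# Minimal sketch of a rational skip pattern: each answer routes the
# respondent to the next relevant question. All question IDs and
# routing rules are hypothetical, not the actual LINK questionnaire.
QUESTIONS = {
    "saw_brand_before": {
        "text": "Had you seen this brand advertised before today?",
        "next": lambda ans: "prior_impression" if ans == "yes" else "ad_clarity",
    },
    "prior_impression": {
        "text": "What was your impression of those earlier ads?",
        "next": lambda ans: "ad_clarity",
    },
    "ad_clarity": {
        "text": "How clear was the main message of this commercial?",
        "next": lambda ans: None,  # end of section
    },
}

def run_section(answer_fn, start="saw_brand_before"):
    """Walk the skip pattern, asking only the questions that apply."""
    responses, qid = {}, start
    while qid is not None:
        q = QUESTIONS[qid]
        responses[qid] = answer_fn(q["text"])
        qid = q["next"](responses[qid])
    return responses

# A respondent who hadn't seen the brand skips the follow-up question.
print(run_section(lambda text: "no"))
```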

To lower stress for the companies testing creative, our AskingMedia broadband platform streams video in such a way that it can’t be saved or replayed later, ensuring the security of the creative. The attractiveness of the creative is protected by Macromedia’s Flash technology, which reduces image drop-off and so enhances viewing pleasure.
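AskingMedia’s streaming internals are proprietary, so I won’t pretend to reproduce them here. As one illustration of the general technique, though, a platform can gate each stream behind a single-use token, so the clip plays exactly once and any later request is refused. This sketch is my own assumption of how such a gate could work, not Delvinia’s implementation.

```python
# Illustrative sketch of one-time playback tokens (my assumption of
# the general technique, not AskingMedia's actual implementation).
# Each respondent gets a token; the stream endpoint honors it once.
import secrets

class OneTimeStreamGate:
    def __init__(self):
        self._live_tokens = set()

    def issue_token(self) -> str:
        """Mint a single-use token when the respondent reaches the video."""
        token = secrets.token_urlsafe(16)
        self._live_tokens.add(token)
        return token

    def authorize(self, token: str) -> bool:
        """Allow the stream once, then invalidate the token so the
        clip cannot be re-requested or replayed from the same URL."""
        if token in self._live_tokens:
            self._live_tokens.discard(token)
            return True
        return False

gate = OneTimeStreamGate()
t = gate.issue_token()
print(gate.authorize(t))  # True  -> stream the commercial
print(gate.authorize(t))  # False -> replay attempt refused
```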

Make it user-friendly

In addition to evaluating consistency between online and offline methodologies, it is important to ensure that the survey tool is at least as user-friendly as current industry standards. Most offline research uses a CATI approach, which puts no response burden on participants. Internet-based surveying, meanwhile, has increased significantly over the past five years, yet simple inspection of most Internet surveys suggests that little thought has gone into the ergonomics or user-centered principles that would make these systems simple and easy for respondents to use. It appears instead that, aside from the use of radio buttons and the odd sliding response scale, most people simply translate a Word document into an HTML survey.

One of the main goals in developing the broadband tool was to make it easy for respondents to complete a survey. At the end of the Nissan survey, a series of questions probed participants about the actual survey experience.

A majority felt that this survey was better than surveys they had completed in the past. Fifty-one percent felt the survey was better and an additional 33 percent felt it was the same. Only 8 percent felt it was worse than other surveys they had completed online.

More specifically, respondents found the survey easy to use and easy to navigate, and just under half were very satisfied with the download time. Three-fifths of all respondents (60 percent) found the survey easy to complete, and three-quarters (73 percent) found it easy to navigate through. Less than half (48 percent) were very satisfied with the time it took for the video to download, but this is as much a function of computer hardware and the current limitations of the Internet as it is of the survey tool.

Some have suggested that online surveys should be the same as traditional paper-and-pencil surveys and contain as few embellishments as possible. This implementation contained a virtual hostess who could be accessed at any time to answer questions. Response to the hostess was favorable: 58 percent had a very positive impression of it. Most of the responses indicated that the hostess was helpful. A minority felt that it was not needed. Very few (3 percent) indicated a preference for an in-person hostess.

Respondents were also asked about their impression of the user interface (the look and feel of the survey). A majority (52 percent) had a very positive impression of the interface and a third (33 percent) had a somewhat positive impression. Responses to the interface focused on such things as ease of use (27 percent), design (18 percent), and the fact that it worked well (12 percent). Seven percent mentioned that it was fast/quick. Nonetheless, not all feedback was positive. Some felt the screen was too small (8 percent) or the download time was too slow (5 percent), and a few said the survey was too long (4 percent) - though length is technically not part of the interface.

One caution

While Popiel feels that, in the main, there is no reason to suspect the online and offline samples are not comparable, he issued one caution for researchers comparing broadband ad testing results to databases of telephone interview results or anecdotes from focus groups: respondents to broadband surveys, like respondents to mail surveys, answer alone, and so tend to score emotional factors lower than people reacting to another person do. But Popiel says that since the pattern of responses over a whole questionnaire is the same, it’s easy to adjust online scores with an algorithm for comparison with existing offline databases.
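Popiel doesn’t spell the algorithm out, but the general approach is a familiar one: if online scores track offline scores with a consistent offset, a simple calibration fitted on paired results lets online data be read against offline norms. The least-squares sketch below is illustrative only - the paired scores are invented, and Popiel’s actual adjustment may well differ.

```python
# Sketch of a calibration adjustment (illustrative only; Popiel's
# actual algorithm is not published). If online scores run uniformly
# lower on emotional measures but follow the same pattern, a linear
# fit on paired offline/online results maps one scale to the other.
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Hypothetical paired scores on the same emotional measures.
online  = [52, 48, 61, 55, 40]   # broadband panel
offline = [58, 55, 67, 62, 47]   # central-location norms

a, b = fit_linear(online, offline)
adjusted = [a * score + b for score in online]
print([round(s, 1) for s in adjusted])  # online scores on the offline scale
```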

Hard costs for online ad testing with a national sample are about the same as for offline testing in one location. Soft costs are lower for both the market research firm and its client - since everything happens faster, less staff time is required to manage the project.

There is always a turning point in the adoption cycle of a new technology, when novelty attains ubiquity, when technology becomes commodity, when the discretionary morphs into the necessary. There was a point at which the toaster ceased to be a technical innovation and became an appliance.

For broadband ad testing, that turning point is fast upon us. I know of several firms conducting broadband ad tests this year. I predict that by 2010, when we say a survey is “in the field,” we’ll mean it is online.