Editor’s note: Bill Ahlhauser is executive vice president of Americom Research, Inc., Wartrace, Tenn.

This article discusses broad issues in Web and other computerized interviewing. While it is necessarily written at a general level, we hope you find it helpful in your specific considerations.

One could think about issues related to Internet surveys in five broad categories:

1) projectability, 2) applicability and sophistication, 3) ease of use, 4) purpose, and 5) extension.

1. Projectability. Normally, market research must be credibly projectable to the total target population, which depends on random sampling. From traditional research methodologies, we know that no sampling is truly random. With declining participation rates in all of the intercept methods, their projectability is subject to renewed scrutiny. Internet sampling needs the same kinds of adjustments between the theoretically ideal and the pragmatic, and will ultimately be proven viable by the same kinds of practical experience, with the emphasis on demonstrating that samples are representative on key characteristics. The remaining differences will be resolved by weighting back to the target population, and expertise in when and how to weight will be a differentiating factor in research competition.

In addition, concern about the projectability of Web research follows the grand tradition of concern about telephone interviewing when it emerged. Telephone interviewing overcame those concerns by dint of the incredible advantages it brings to many types of surveys, propelled by the pervasive and transforming effect the telephone was having on society. We have found that clients (end users) have so far driven the majority of Web research, not because they have facts that researchers don’t have, but because they envision the effect the Web may have on their business. So Web interviewing, too, will catch up to societal trends.

Back on the ground, the problems look like this. Recent studies report that 26 percent of U.S. homes are on-line (out of 50 percent that have PCs), and the number is rising dramatically. It’s hard to say you won’t get a projectable sample out of that big a proportion, especially for populations capable of buying big-ticket or high-tech items.

Another issue, with permutations, is drawing the sample. When a company that has a Web site is interested in its own customers, there is no problem recruiting a sample with on-line access from that population. Nor is it a problem to recruit customers of your client’s competitors. Among options, you can:

  • run ad banners on industry-interest Web sites;
  • run ad banners on search engines or portal Web sites;
  • send an e-mail to product-interest mailing lists (with appropriate sensitivity to spam vs. opt-in lists);
  • recruit from malls (to take the Web survey).

Originally, the issue was that no matter where (what Web site) you recruit from on-line, you are recruiting from a specialized group (the group that frequents the site you’re recruiting on). But now there are general interest sites (the portal sites like Yahoo!), other broad usage sites, and just plain more and more people visiting many of those sites. In addition, other approaches are becoming feasible, including lower cost (than mail) panel options, off-line recruiting for on-line work, etc.

In addition, the issue of projectability tends to recede the lower the incidence of the sample being sought. There’s clearly a cost factor. In the extreme, it may be less expensive to run banner ads on a specialty Web site that has lower utilization but happens to reach exactly the population you’re looking for than to run them on a popular but general-interest portal site. There’s probably an intuitive factor as well - the fewer members of a group there are, the more you’re going to have to orient your search to the places where they are.

For perspective, there are approximately 150 million people on-line worldwide, about half of them in the U.S.

2. Applicability and sophistication. Appropriately wording and presenting questions in self-administered interviews on the computer is its own art, and doing it on the Web is a specialized subset of this art.

A further complication arises from the fact that computerized surveys, while being more reliable in how they execute, are, for practical purposes, restricted to pre-programmed capabilities. You can work with what you find, or commission expanded capabilities.

Among the issues of applicability and sophistication of research in this environment:

  • Size (and visual dynamic) of the screen. The screen is only so big; what you want to show, and to ask, has to fit. What are the implications for handling stimuli, laying out questions, using rating scales, etc.?
  • Intensity of attention. Statements and questions are often shortened for this environment.
  • Graphical user interface (GUI). This refers specifically to clicking on what you want rather than having to type answers or codes. More generally, it refers to the researcher’s control over the background, colors, fonts, etc. Control over this set of variables was not in the researcher’s hands before. It’s both an opportunity and a burden, following this logic: the more graphical and lifelike the survey presentation is, the closer we should be to getting a response similar to what the respondent would give in real life. By the same token, the more applicable the presentation is to one person’s life, the less it may be to another’s. On the Web, there are also questions of how much control you have over the placement of things on screen, what assumptions you make about how the respondent has set the screen resolution (e.g., 640 x 480 or 800 x 600), and whether or not you assume the respondent is willing to scroll down.
  • Structuring of question types. All computerized interviewing depends on structuring question types. What question types, with what additional options? At the simplest level, for example, if you have a multiple-choice question, you still need to be able to define a mutually exclusive answer, such as "None of the above," which is not permitted in combination with other answers. These structures constitute another level of constraint on - or opportunity for - survey design. (A minimal sketch of that kind of answer check appears below, after this list.)
  • Control of stimuli. Presentation of stimuli operates at multiple levels: Can you keep it hidden until you want respondents to see it? (Yes.) Can you permit respondents to see the stimuli while they answer the questions? (Yes.) What if it’s too big to be on-screen with the question? (There are options.) Can you prevent them from seeing it? (Sometimes.) Can you time the presentation? (Yes, on a central location PC interview; no, on the Web.)
  • Stimuli constraints. From a technical perspective, graphics are simple to incorporate into Web and other multimedia surveys. However, on the Web, every graphic has to be downloaded to each respondent separately, so the more graphics, the longer the survey will take. Also, the higher the resolution of the graphic, the bigger the file that has to be downloaded. Therefore, the extent of graphics used, and the resolution quality of those graphics, must be considered in connection with a) the likelihood that respondents have fast modems, and b) the incentive for respondents to complete the survey. (A rough download-time calculation also follows this list.)

One other consideration: you don’t want to use graphics of higher quality than the expected capabilities of the respondents’ computers. Sometimes that’s a settings issue, sometimes it’s a hardware issue.
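
To make the "None of the above" constraint from the question-type item above concrete, here is a minimal sketch, in Python, of the kind of check a computerized interview applies before accepting a multiple-choice answer. The option labels and function name are hypothetical, and the sketch is not tied to any particular interviewing package.

```python
# Minimal sketch of validating a multiple-choice answer where one option
# ("None of the above") is defined as mutually exclusive.
# Option labels and the function name are hypothetical.

EXCLUSIVE = "None of the above"

def validate_answer(selected, options):
    """Return (ok, message) for a set of selected options."""
    if not selected:
        return False, "Please select at least one answer."
    if any(choice not in options for choice in selected):
        return False, "Selection contains an option that is not on the list."
    if EXCLUSIVE in selected and len(selected) > 1:
        # The exclusive answer may not be combined with any other answer.
        return False, f'"{EXCLUSIVE}" cannot be combined with other answers.'
    return True, "OK"

options = ["Brand A", "Brand B", "Brand C", EXCLUSIVE]
print(validate_answer(["Brand A", EXCLUSIVE], options))   # rejected
print(validate_answer(["Brand A", "Brand C"], options))   # accepted
```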
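
Similarly, a rough back-of-the-envelope calculation shows how quickly graphics add up over a dial-up modem. The figures below are illustrative assumptions, not measurements.

```python
# Rough download-time estimate for survey graphics over a dial-up modem.
# All figures are illustrative assumptions.

def download_seconds(file_kilobytes, modem_kbps=56.0, efficiency=0.8):
    """Approximate seconds to download a file of the given size.

    modem_kbps is the nominal modem speed in kilobits per second;
    efficiency discounts for protocol overhead and line quality.
    """
    kilobits = file_kilobytes * 8
    return kilobits / (modem_kbps * efficiency)

# Ten 60 KB images in one survey:
total_kb = 10 * 60
print(f"{download_seconds(total_kb):.0f} seconds over a 56k modem")
# Roughly 107 seconds -- close to two minutes of waiting spread across
# the survey before the respondent has seen all the stimuli.
```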

  • Rotation. Can you rotate answers to a question? (Yes.) Can you rotate rating questions? (Yes.) Can you rotate graphical stimuli? (Yes.) Can you control multiple versions by varying the key stimulus page for different respondents? (Yes.) (A short rotation sketch appears at the end of this list.)
  • Logic. Can you skip from one place in the survey to another based on the answer given to a single-choice question? (Yes. For central location PC interviews this is no issue because they are structured with a single question per page. For Web interviews, in the effort to minimize excess downloads, there are commonly multiple questions per page. In that case, the page breaks have to be structured appropriately to permit the desired skip; a small routing-table sketch appears at the end of this list.) Can you skip based on more complex logic? (Yes, within limits.) There are more subtle issues, too; for example, on the Web you can’t really prevent someone from backing up, which affects certain kinds of recall or ad effectiveness research. (But there are ways to deal with this issue.)
  • Nature of respondent input. Are respondents comfortable using a mouse? (In central locations, most are; on the Web, everyone is.) Are they comfortable typing? (Again, in central locations, most are; on the Web, everyone is.)
  • Hardware and multimedia. Here there’s a big difference, at this point in history, between the Web and central location PC interviewing. In a central location, you both a) know what the hardware and multimedia capabilities of a system are, and b) can fairly inexpensively ensure that you have capabilities for full-motion video, seamless integration of animation with graphics, etc. On the Web, you don’t know (unless you are pursuing a panel design) what the respondent’s hardware or bandwidth (modem speed) is, and you cannot reasonably hope to do video or audio on the Web. (Again, there are options, such as sending a CD-ROM and having the Web interview interact directly with the CD-ROM.)
  • Data. The data for closed-end questions is written in ASCII format - fixed-position, comma-delimited, numerically encoded - as a single record per respondent. Normally the data file excludes data for incomplete interviews. There is an automatic datamap, header file, and other reports. It is easy to pull this data into a tabs package, spreadsheet or database. Data is available in real time, and can be made available on a password-protected Web site. (A short data-file sketch appears at the end of this list.)
  • Dealing with practice or test interviews. In central location PC interviews, how do you exclude practice interviews? In Web interviews, how do you exclude test interviews? There are issues of the data itself, and issues related to calculating interview completion rates.
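
To illustrate the rotation item above, here is a minimal Python sketch of per-respondent answer rotation. The option labels are hypothetical, and anchoring "None of the above" at the bottom is simply one common convention, not a claim about any particular system.

```python
import random

# Minimal sketch of per-respondent answer rotation.
# Option labels are hypothetical; anchored options stay at the bottom.

def rotate_answers(options, anchored=("None of the above",), seed=None):
    """Return the options in a randomized order, with anchored items last."""
    rng = random.Random(seed)   # seed per respondent, so the order is reproducible
    rotatable = [o for o in options if o not in anchored]
    fixed = [o for o in options if o in anchored]
    rng.shuffle(rotatable)
    return rotatable + fixed

options = ["Brand A", "Brand B", "Brand C", "Brand D", "None of the above"]
for respondent_id in (101, 102):
    shown = rotate_answers(options, seed=respondent_id)
    print(respondent_id, shown)  # the order shown is stored with the response
```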
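
The page-break constraint in the logic item can also be sketched as a simple routing table: pages group several questions, and a skip triggered by a single-choice answer can only land at the top of a page, so the breaks have to fall where the skips need to go. The page and question names below are hypothetical.

```python
# Minimal sketch of skip logic over a paged Web survey.
# Pages group several questions; a skip can only land at the top of a page,
# so page breaks must be placed where the skips need to go.
# Page and question identifiers are hypothetical.

PAGES = {
    "page1": ["Q1_own_pc", "Q2_online_at_home"],
    "page2": ["Q3_hours_online", "Q4_sites_visited"],
    "page3": ["Q5_demographics"],
}

# If Q2 is answered "No", skip the usage questions and go straight to page3.
SKIPS = {
    ("Q2_online_at_home", "No"): "page3",
}

def next_page(current_page, answers):
    """Return the next page, honoring any skip triggered on the current page."""
    order = list(PAGES)
    for question in PAGES[current_page]:
        target = SKIPS.get((question, answers.get(question)))
        if target:
            return target
    idx = order.index(current_page)
    return order[idx + 1] if idx + 1 < len(order) else None  # None = survey done

print(next_page("page1", {"Q2_online_at_home": "No"}))   # page3
print(next_page("page1", {"Q2_online_at_home": "Yes"}))  # page2
```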
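
And for the data item, the sketch below shows how little work it takes to pull a comma-delimited, one-record-per-respondent file into an analysis tool. The file contents, column names, and answer codes are hypothetical stand-ins for what an actual datamap would document.

```python
import csv
import io

# Minimal sketch of reading a comma-delimited, one-record-per-respondent
# data file with numerically encoded answers. File contents, column names,
# and codes are hypothetical.

raw = io.StringIO(
    "resp_id,Q1,Q2,Q3\n"
    "1001,2,1,5\n"
    "1002,3,1,4\n"
    "1003,1,2,5\n"
)

rows = list(csv.DictReader(raw))

# Simple top-line count for Q2 (e.g., 1 = Yes, 2 = No per the datamap).
counts = {}
for row in rows:
    counts[row["Q2"]] = counts.get(row["Q2"], 0) + 1

print(counts)  # {'1': 2, '2': 1}
```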

3. Ease of use. At the end of the day, all of this must be usable, and achievable at a cost that facilitates making money. It has to be efficient. Our firm builds with graphical interfaces for the survey setup process, as well as for the respondent. Therefore, we can usually set up and execute studies at costs that increase profitability for the research company.

4. Purpose. What types of surveys are better suited to Web interviewing? As you know, nearly all of the work, time and cost on the technical side of Web surveys is in setting them up. Therefore they are more useful for projects with larger numbers of respondents, or with easy setup.

Because of the questions about knowing who you are talking to, Web surveys are particularly good for panel-type applications, where the respondent’s identity is already established.

Because both sending an invitation and getting the responses are in real time, Web surveys are good when speed is of the essence. Depending on sampling and complexity issues, a Web survey can be written, programmed and executed within a few days.

Because of the geographic spread of the Web, it is a good medium for interviewing low-incidence samples, provided there is time for either pre-recruiting or live recruiting.

Clearly, the Web is a good environment for interviewing on any high-tech product, and perfect for interviewing on Web site functionality, satisfaction, improvement, etc.

The Web is not good yet for video, audio, animation or high resolution graphics. However, in a hybrid mode, such as with a CD-ROM, the Web can do yeoman’s work on multimedia. (Currently we’re working on how we can make such a CD-ROM one-play only, for security reasons.)

The Web is not good for very complex (high programming cost) surveys administered to small (low ROI) samples. But even small samples can be handled economically when the survey setup itself is efficient enough.

The Web is terrific for getting customer feedback over a long period (e.g., a year), even when there are relatively few interviews per month, because of the low cost of maintenance for a live survey.

5. Extension. There are two forces that may propel Web research into a more integrated relationship with marketing.

First, Web surveys, because they are controlled on a central server, are only a hair away from one-to-one marketing. This depends on customer consent to the use of research for marketing purposes, among other issues, but the extension of Web research into one-to-one marketing - especially on the Web itself - is probably inevitable.

Second, Web panels, because they are relatively inexpensive to set up and maintain - due to the lower cost per transaction in this environment - will probably become prevalent for many purposes. Among the impacts we expect these panels to have is a significant increase in the number of surveys relevant to day-to-day (or at least week-to-week) product development decision-making. We expect the surveys to be shorter, incidence levels perhaps lower, and, over the longer term, the stimuli to be richer.