Editor's note: Lee Slurzberg is president of Lee Slurzberg Research, Inc., Englewood, N.J.

After several decades in the profession of marketing research, I've come to some conclusions about the state of the art of sampling. As a profession, our ability to draw samples is far ahead of what we could do 40 or even 30 years ago. For example, back then, we drew random digit dialing (RDD) samples using tables of random numbers and a systematic sample of central offices (NNXs, the first three digits of the local number). It took a couple of coders two days to write out the telephone numbers on listing sheets for a national sample of 1,000 interviews. Today sampling companies can do that overnight. But back then, we completed interviews with 50 to 60 percent of the sample of live telephone numbers drawn.

The problem, in the '90s, is the limitations on completing a reliable sample using telephone interviews of the general population and, therefore, the effect these limitations have on sampling error (the difference between the parameter estimate from the sample and the actual population parameter).

"The validity of most marketing research projects ultimately rests upon the degree to which sample-based statistics are truly representative of actual population parameters. If the sampling is biased or inadequate, it is unlikely that the research will provide marketing management with a solid basis for decision."  -from the foreword to "The Use of Sampling in Marketing Research" (American Marketing Association, 1975)

The example below reflects the typical result of conducting a national telephone study.

If 93 percent of all U.S. households have a telephone, and 70 percent of those have listed numbers, then 65 percent of all U.S. households theoretically could be reached using a listed telephone sample. But if 56 percent of U.S. listed households have an answering machine and screen their calls, then only 44 percent have no machine to screen out interviewers. Some persons with an answering machine will not screen all calls, or will return a call to an interviewing service.

Then only 29 percent (.44 x .65) of all U.S. households are available for a live contact.

If 5 percent of those contacted refuse, and another 5 percent are deaf or don't speak English (and the interviewer lacks bilingual questionnaires and bilingual ability in the appropriate language), then only 26 percent (.90 x .29) of households can be reached by an "ordinary" telephone interview.

That 26 percent does not allow for the percentage of no-answers or busy signals after two, three or x callbacks.
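The arithmetic chain above is easy to audit. Here is a minimal sketch in Python, using the illustrative percentages from the text; they are example figures for this exercise, not measured constants:

```python
# Illustrative coverage arithmetic from the example above.
phone = 0.93         # U.S. households with a telephone
listed = 0.70        # share of phone households with a listed number
no_screening = 0.44  # listed households without a screening answering machine
reachable = 0.90     # share not lost to refusals or language/hearing barriers

listed_households = phone * listed                # about .65 of all households
live_contact = listed_households * no_screening   # about .29 of all households
completed = live_contact * reachable              # about .26 of all households

print(f"Reachable via listed sample:        {listed_households:.0%}")
print(f"Available for a live contact:       {live_contact:.0%}")
print(f"Reachable by an ordinary interview: {completed:.0%}")
```

Each stage simply multiplies away another slice of the population, which is why the final figure falls so quickly even when each individual loss looks modest.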

Ergo, if I survey a sample of 1,200, drawn from only 26 percent of U.S. households, how can I say that the sampling error is only +/-5 percent (or whatever) and suggest that I am explaining the error in a survey supposedly representing all U.S. households?
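For reference, the conventional sampling-error figure comes from the standard formula for a proportion. A quick sketch of that textbook calculation, assuming simple random sampling and the worst case p = 0.5:

```python
import math

# Conventional 95% margin of error for a proportion under simple
# random sampling; p = 0.5 gives the widest (worst-case) interval.
n = 1200
p = 0.5
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"Nominal sampling error: +/-{moe:.1%}")  # about +/-2.8%
```

Note that the formula accounts only for random sampling variation; it says nothing about the roughly 74 percent of households excluded before the first call is dialed.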

Is the "non-sampling" error (or non-response error) greater than the "sampling error?"

I'm not even addressing the issue that many telephone studies are not of the national variety. Perhaps answering machines are more popular in Los Angeles and New York than in Tupelo, Miss. Are unlisted households (targets of RDD samples) more likely to have an answering machine, or an answering service?

One additional point: if we choose to use random digit dialing to pick up unlisted or not-yet-listed numbers, the 26 percent may go up to 28 percent.

Where does this leave us with respect to conducting these national studies?

It certainly doesn't mean we should stop doing them. Of course, in most cases, we are better off conducting the study than guessing or intuiting the findings.

It does mean that we should be careful when publishing, or reporting to non-research management, the statistical reliability of direct projections. Perhaps when we publish sample survey findings, we should clarify sampling error and report completion rates. We should also indicate that there are other sources of error: the respondent's inability to recall, the wording of questions, interviewer error and non-response error. These survey error sources also affect the reliability of the data, but our profession doesn't have standardized measurements for them as we do for sampling error.

It does mean that our industry (AMA, AAPOR, ARF, MRA) should strive to find ways to increase completion rates.

It does mean we should demand of clients the calendar time necessary to make callbacks on different days and in different day parts. Callbacks on live numbers (busy signals, not-at-homes) are usually less expensive than dialing virgin telephone numbers.

I hope this article helps put our work in perspective.