Robert L. Zimmermann is senior research manager for design and analysis at Maritz Marketing Research, Inc., Minneapolis division, a company he has been with for three years. He is currently a clinical assistant professor of psychiatry at the University of Minnesota, where he serves as a statistical consultant on grants in the areas of addiction and eating disorders. Zimmermann has taught at the University of Winnipeg and held research positions at George Washington University and the University of Minnesota. He holds an M.A. and a Ph.D. in psychology from the University of Minnesota and has published over 60 articles in psychiatric, educational and marketing research.

Personal computers are changing the way we do things by putting computing power into the hands of a large number of individuals who depended on specialists only a few years ago. Statistical analysis is only one of many areas to which the end-user now has direct and fluent access. As is true for both accounting and database maintenance, the use of personal computers in statistical analysis can be a two-edged sword. Virtually anybody can have direct access to a full range of statistical tools, but the potential for misusing the methods or misinterpreting the results is increased.

What are the risks entailed in interactive statistical analysis? Are statistical tests like pocket calculators, where all you have to do is enter the data properly, follow the rules, and correct output will result? Or are statistical tests more like psychological or medical evaluations, requiring an experienced professional to interpret the results in context? It may surprise people not trained in statistics that the latter is true in many instances. This is especially true with regard to the use of statistics in a decision-making process.

Perhaps the most critical use of statistics in marketing research is as an adjunct to decision-making. When a company is confronted with a decision that entails considerable financial risk, research aims to reduce that risk by providing information on the probable outcomes of the various alternatives. In the simplest, and perhaps most ideal, case there is a discrete hypothesis, a specific sampling methodology, a single pre-defined criterion measure, and a single appropriate statistical test. Under these conditions, we can comfortably recommend the optimal decision, and usually provide some estimate of the confidence we place in that decision. We can also estimate beforehand the probability that such a methodology will provide a correct answer.

Multiple criteria

We rarely conduct marketing research in such a manner. Often there are multiple criteria. Almost always, supporting information is collected and additional analyses are performed. We do not trust our criteria; the decisions seem too complex to leave to a simple go/no-go statistical criterion; interviews are too expensive to let go without milking them dry of any possibly relevant information.

Whenever we deviate from the elegance and simplicity of the model described above, we run the risk of inadvertently introducing spurious results. Most people are at least familiar with the impact of multiple tests in hypothesis testing. If you are willing to accept a hypothesis as true when any one of several tests is statistically significant, you must take into account that you have increased the probability that an apparently meaningful difference will occur through sampling error alone.
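How quickly that probability grows is easy to see with a little arithmetic. The sketch below is a hypothetical illustration, assuming the several tests are independent and each is run at the conventional .05 level; it is not drawn from any particular study.

```python
# Chance of at least one spurious "significant" result when k
# independent tests are each run at significance level alpha.
alpha = 0.05

for k in (1, 3, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests -> {p_any:.2f} chance of a purely spurious finding")
```

With 10 such tests, the chance that at least one difference reaches nominal significance through sampling error alone is already about 40 percent; with 20 it is roughly two chances in three.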

It is obvious, but less generally considered, that this applies to the totality of analyses performed, and not simply to the multivariate set involved in one specific analysis. It also applies to both explicit and implicit analyses. If you decide on the basis of an initial perusal of the data that it is more profitable to focus your analyses on certain aspects, then you have implicitly and probably inadvertently performed something analogous to a statistical analysis. Your decision capitalizes on chance deviations and alters the validity of any subsequent statistical tests.

Sequence of analyses

It cannot be too strongly emphasized that the sequence of analyses can markedly affect the probability levels of the final analysis. If you enter into a regression analysis only those variables that look meaningful, you have in effect performed two sequential analyses, one implicit and one explicit. You may have markedly amplified the impact of sampling error. Statistical probabilities, whether applied in a hypothesis-testing or a decision-making context, presume an explicit and very specific sequence of events from data collection to final analysis, and any deviation from that sequence distorts the values reported.
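A small simulation makes the cost of that implicit first pass concrete. The sketch below is a hypothetical illustration, not taken from any real study: it screens fifty pure-noise predictors for the handful that "look meaningful" and then regresses the outcome on only those, alongside a same-sized regression run without any screening.

```python
# Hypothetical illustration: pre-screening predictors that "look
# meaningful" and then regressing on only those inflates the apparent
# fit, even though every predictor here is pure noise.
import numpy as np

rng = np.random.default_rng(0)
n, n_candidates, n_kept = 100, 50, 5

def r_squared(X, y):
    """R-squared from an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

y = rng.standard_normal(n)                    # outcome: pure noise
X = rng.standard_normal((n, n_candidates))    # 50 noise predictors

# Implicit first "analysis": keep the predictors most correlated with y.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_candidates)])
kept = np.argsort(corr)[-n_kept:]

# Explicit second analysis: regress y on the pre-screened predictors.
print("R-squared, pre-screened predictors:", round(r_squared(X[:, kept], y), 2))
print("R-squared, unscreened predictors:  ", round(r_squared(X[:, :n_kept], y), 2))
```

Since every predictor is noise, both regressions should fit about equally poorly; the pre-screened one typically reports a much better fit only because the "meaningful-looking" variables were chosen by peeking at the same data the regression is then asked to confirm.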

Let us look at the way statistics might be done on a personal computer, and what impact this might have on the validity of the statistics.

First, the work is interactive, which generally means that it is sequential and undocumented. Second, it is typically done by marketers or other end-users who are specialists in the substantive areas being studied rather than in the statistical methods employed.

By undocumented, I simply mean that you print out only the final results, not all the intermediate steps. The screen is instantaneous, silent and often in full color. The printer is slow, noisy, and does a lousy job with graphics. So, you try this and try that, and an hour or a day later you have a very impressive table or set of simulations, without any explicit record of all the steps you went through, of the tables rejected, of the combinations you tried that just did not quite work out.

Successive approximations

These analyses are sequential. Whether you are doing a series of simulations or exploring the permutations on a set of tables, you proceed by sets of successive approximations. The nature of interactive analysis is that each procedure provides output that influences the way you set up the next analysis. Following each analysis, you make a decision which is explicitly formulated in the setup of the next analysis. These successive decisions are based on a complex interaction between the results of the previous tests and some model or set of expectations that the analyst has.

The model may be something as neutral as "I like data with large differences," or "data that coheres," or it may be something more malignant such as, "the company has invested $15,000,000 in product A and it had better succeed" or "my job is on the line if I cannot show X." The model functions as a criterion which selects or rejects the results of intermediate analyses, combines categories and groups data, constructs derived variables, and sets up successive analyses. This process inevitably favors whatever tendencies in the data enhance or conform to the model, and thus there is a selective capitalization on chance.
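To see how strongly such a criterion can select chance tendencies, consider the hypothetical simulation sketched below (it assumes NumPy and SciPy are available and is not drawn from any real study). The data contain no real differences at all, yet hunting through the possible regroupings of an eight-category variable for the split with the largest difference yields a nominally "significant" result far more often than the five percent a single pre-planned test would imply.

```python
# Hypothetical illustration: a criterion of "find a big difference,"
# applied to pure noise by trying every way of regrouping 8 categories
# into two groups and keeping the best-looking split.
import itertools

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_categories, n_per_cell, n_sims = 8, 25, 200
hits = 0

for _ in range(n_sims):
    # Scores with no real differences among the 8 categories.
    scores = rng.standard_normal((n_categories, n_per_cell))
    best_p = min(
        stats.ttest_ind(
            scores[list(group)].ravel(),                # one regrouping...
            np.delete(scores, group, axis=0).ravel(),   # ...versus the rest
        ).pvalue
        for size in range(1, n_categories // 2 + 1)
        for group in itertools.combinations(range(n_categories), size)
    )
    hits += best_p < 0.05   # best split clears a nominal .05 test

print(f"'Significant' best split found in {hits / n_sims:.0%} of noise datasets")
```

The printed percentage is simply the fraction of pure-noise datasets in which the most favorable regrouping clears the nominal .05 threshold; nothing about the data has changed, only the criterion by which intermediate results were kept or discarded.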

This mode of analysis can be contrasted with the analytic methods typically employed in the marketing research department of a major corporation or a full-service firm. The analysis is usually supervised by a professional who is both statistically sophisticated and relatively unbiased as to outcome. A standard sequence of analyses is performed, or the exceptions are documented. The full set of procedures from data collection to written report can be explicitly documented and reproduced if necessary. These are the minimal conditions required to produce results upon which critical decisions can be based. At the least, within these parameters, the results can be discussed and independently evaluated.

Basic guidelines

It is certainly possible to explore data interactively in a criterion-neutral manner. However, the very people most apt to use interactive analytic tools on a personal computer are also those most apt to be biased as well as to have the least statistical sophistication. While there are "correct procedures" that a statistician might recommend, their use would by and large contravene the major conveniences of interactive processing.

The following are some basic guidelines specifically appropriate for interactive ex post facto analyses:

  1. Remember that all significance levels and stated probabilities are apt to be very misleading.
  2. Document all your procedures, including your reasons for doing an analysis in a particular manner. Especially note intermediate analyses and analyses you do not include in the final report. At least acknowledge unreported analyses in the final report.
  3. Regrouping, derived variables, the selection of subsets of variables, and the like should be used consistently throughout your analyses, be substantively justifiable, and serve to simplify and summarize rather than to enhance significance levels.
  4. Never base major strategic decisions on interactive ex post facto analyses. Justify them solely on the basis of analyses for which probabilities and risks can be meaningfully assessed.
  5. Challenge your conclusions. See if another interpretation is viable. Better yet, have an independent review performed by someone with neutral or opposing biases.

It would be difficult to distort a study which produced clear positive results, that is, highly statistically significant differences. But in such cases you probably do not need a statistical analysis to make a decision. It is with the most difficult decisions that bias is most apt to occur, and these are also the studies most likely to involve the most extensive interactive data review.

Opening up options

Research may be used to open up options, as when a perceptual mapping study is used to suggest potential new products, or to close off options, as when only one of two or more mutually exclusive alternatives must be selected. For decisions directed toward opening up options, especially those with minimal financial risk, interactive data exploration can be a highly useful and creative activity, providing those with the most direct substantive knowledge of the problem unmediated access to the "real" data.

On the other hand, decisions which close options, especially where there are severe financial penalties for choosing the wrong option, should be based only on analyses for which explicit risks can be stated.