Editor’s note: Kevin Gray is president of Cannon Gray LLC, a marketing science and analytics consultancy.

Although marketing research may not be "hard" science, it is our professional obligation as researchers to strive for scientific rigor to the best of our abilities within the constraints under which we work. Some methodological fine points are likely to have little or no impact on the client's decisions. Others may seem trivial or even geeky at first glance but are actually consequential, and marketing researchers must be wary of conflating the two.

I would like to offer some thoughts on what I call research thinking. Though written from my perspective as a quantitative researcher, this article should have relevance to most kinds of marketing research, including qualitative research and big data analytics. Marketing research methodologies have been adapted from disparate fields and pieced together by practitioners over the course of several generations. There surely are good and bad practices but no absolute best practice, despite occasional assertions to the contrary.

Verifying data quality

Errors of many kinds can sneak into our data; therefore, data checking, cleaning and other "janitorial work" should never be skipped. We should be reasonably confident that the data contain no serious errors and actually mean what we think they mean. Consumers do not necessarily interpret survey questions in exactly the same way as the marketing researchers who write them. International research is more prone to misunderstandings, not only because of translation errors but because some concepts don't travel well across cultures and cannot be communicated precisely. This is particularly true when questions pertain to values, attitudes and lifestyles. Unfortunately, the frantic pace of today's business world often means that flaws in questionnaire design are detected only after fieldwork has been completed.

We also need to exercise care when interpreting customer records and other big data, which can be messy and confusing. Even building a traditional data warehouse is rarely a simple task, in part because the various parts of an organization have diverse requirements. Data definitions are often ambiguous and it's not uncommon to discover two or more data fields that are almost but not exactly the same, leaving us to decide which to use. These janitorial tasks may seem like unappealing grunt work that is not part of your formal job description, but they are essential to ensuring your data are error-free.
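
For researchers who work with their data in code, many of these checks take only a few lines to automate. Below is a minimal sketch in Python using pandas; the file name, column names and specific checks are hypothetical and purely illustrative of the kind of routine "janitorial" screening worth building into every project.

```python
import pandas as pd

# Hypothetical survey export; the file and column names are illustrative only.
df = pd.read_csv("survey_responses.csv")

# Duplicate respondents
print("Duplicate respondent IDs:", df["respondent_id"].duplicated().sum())

# Missing values, worst columns first
print(df.isna().sum().sort_values(ascending=False).head(10))

# Out-of-range answers on a 1-to-5 rating scale
print("Out-of-range ratings:", (~df["overall_rating"].between(1, 5)).sum())

# Straight-lining: respondents who gave the identical answer to every rating item
rating_cols = [c for c in df.columns if c.startswith("q_rating_")]
print("Possible straight-liners:", (df[rating_cols].nunique(axis=1) == 1).sum())
```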

Correlation versus causation

Scientists need to be careful about inferring cause-and-effect relationships. In most marketing research we are making causal links but often are not consciously aware that we are doing so.

Patterns we spot may result from any number of factors, including those we are unable to measure and those we are unaware of. Consumer groups are often non-equivalent in important ways before we compare them and, since differences between consumers have not been "randomized away," conclusions about causation are usually more problematic than in experimental research. Instead, we must make our causal deductions based upon associations, though this entails risks. "Correlation does not imply causation" is a warning drilled into future statisticians in the classroom and often cited in the business media these days.

Defining relationships

Associations can also be spurious. For example, if a correlation between sales of ice cream and sales of sunscreen were found, it probably would be the result of weather and seasonal marketing activities, since it is improbable that sales of one caused sales of the other. There are also interactions, in which the relationship between two variables is moderated by other variables. For example, the relationship between age and product evaluations may depend to some degree on gender, and vice versa. Another kind of relationship is reciprocal causation, whereby one variable influences a second and that second variable, in turn, affects the first. A case in point is when raising awareness of a brand increases purchase of it, which in turn leads to greater awareness of the brand, since people are more apt to recall brands they often use than those they use infrequently.
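
To make the idea of an interaction concrete, here is a small simulation in Python; the numbers are invented. Ratings rise with age among women and fall with age among men, so the pooled age-rating correlation looks negligible even though the relationship within each group is strong.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
n = 2_000
age = rng.uniform(18, 70, n)
female = rng.integers(0, 2, n).astype(bool)

# Invented process: ratings rise with age for women and fall with age for men.
rating = 50 + np.where(female, 0.4, -0.3) * (age - 40) + rng.normal(0, 5, n)
df = pd.DataFrame({"age": age, "female": female, "rating": rating})

for sex, sub in df.groupby("female"):
    print(f"female={sex}: corr(age, rating) = {sub['age'].corr(sub['rating']):.2f}")
print(f"pooled:       corr(age, rating) = {df['age'].corr(df['rating']):.2f}")
```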

There are still other trick pitches data can throw at us. For example, correlations between two variables can be masked by other variables and appear small unless statistical adjustments are made to remove noise from the relationship. Also, curvilinear relationships can be obscured by the standard Pearson correlation coefficient: two variables may appear unrelated when in fact they are strongly associated, just not in a straight-line fashion.
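
A brief simulation, again with made-up numbers, illustrates the last point: a response that depends strongly but non-linearly on a driver can show a Pearson correlation close to zero.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(0, 1, 5_000)              # e.g., deviation from an "ideal" price point
y = x**2 + rng.normal(0, 0.2, 5_000)     # strongly related to x, but not linearly

print("Pearson r of x and y:        ", round(np.corrcoef(x, y)[0, 1], 3))      # near zero
print("Pearson r of x squared and y:", round(np.corrcoef(x**2, y)[0, 1], 3))   # near one
```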

When data are collected over time – for example weekly sales and marketing data – causal relationships can sometimes be easier to unravel since cause must logically precede effect. For example, some marketing activities are not intended to have an immediate impact but are correlated with sales in later periods. In our more typical cross-sectional research, however, the data have been collected during one period and a time dimension is lacking. 
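
One simple way to look for delayed effects in weekly data is to correlate current sales with advertising from earlier weeks. The sketch below uses simulated figures in which advertising affects sales two weeks later; the two-week lag shows the strongest correlation. The variable names and numbers are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
weeks = 104
ad_spend = rng.gamma(shape=2.0, scale=10.0, size=weeks)

# Simulated sales respond to advertising two weeks earlier, plus noise.
sales = 100 + 3.0 * np.roll(ad_spend, 2) + rng.normal(0, 10, weeks)
df = pd.DataFrame({"ad_spend": ad_spend, "sales": sales}).iloc[2:]  # drop wrap-around weeks

for lag in range(5):
    r = df["sales"].corr(df["ad_spend"].shift(lag))
    print(f"lag of {lag} week(s): r = {r:.2f}")
```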

With either cross-sectional or time-series data, multivariate analysis can help untangle causal relationships by statistically accounting for potential confounders, but this is rarely easy, and different statistical methods and models can give us very different readings. While often exceedingly useful, multivariate analysis should not be conducted mechanically or on the fly.

Causation requires correlation of some kind but correlation and causation are not the same.

Understanding (and avoiding) data interpretation traps

There are other ways in which we can be led astray when interpreting data. A very serious example would be when the consumers who have completed our survey are atypical and their opinions dissimilar to those of our target population. In truth, it is nearly impossible to obtain a sample of completed interviews that is perfectly representative of our population of interest. This does not imply, however, that all surveys are more or less the same. Representativeness is a continuum and at some point the lack of representativeness will begin to shape decisions.

Though this should be Marketing Research 101, the difference between focus groups and survey research is much more than sample size. Even if we were to assume that representativeness is no more of a concern with focus groups than with surveys, respondent interactions, group dynamics and the nature of the data they produce make the two methodologies fundamentally different. Text analytics software cannot make them the same.

Regression to the mean is a statistical phenomenon that is not intuitive for most of us and I will defer to Wikipedia's concise definition: "... if a variable is extreme on its first measurement, it will tend to be closer to the average on its second measurement – and, paradoxically, if it is extreme on its second measurement, it will tend to have been closer to the average on its first." This has a direct bearing on marketing research; classifying consumers as heavy, medium or light purchasers is one example of where it comes into play. Independent of our marketing efforts, some of the consumers we have put into the heavy bucket would, if measured again later, show lower purchase frequency, and some light purchasers, conversely, would show higher purchase frequency.
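
A quick simulation with invented purchase data shows the effect. Each consumer's underlying purchase rate is held constant, yet the group classified as heavy buyers on the first measurement looks lighter on the second, and the light group looks heavier.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
true_rate = rng.gamma(shape=4.0, scale=1.0, size=n)   # each consumer's underlying purchase rate
wave1 = rng.poisson(true_rate)                        # observed purchases, wave 1
wave2 = rng.poisson(true_rate)                        # observed purchases, wave 2 (nothing has changed)

heavy = wave1 >= np.percentile(wave1, 80)
light = wave1 <= np.percentile(wave1, 20)

print(f"Heavy buyers: wave 1 mean = {wave1[heavy].mean():.2f}, wave 2 mean = {wave2[heavy].mean():.2f}")
print(f"Light buyers: wave 1 mean = {wave1[light].mean():.2f}, wave 2 mean = {wave2[light].mean():.2f}")
```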

Statistical significance testing

Data dredging can be hazardous to our professional health: it's not hard to find an interesting pattern and assume it is real when it is actually just a chance result. Statistical significance testing, if used advisedly, can be helpful in screening out fluke results, but there are risks in over-relying on it. Significance testing assumes probability sampling and measurement without error, two assumptions that are usually not met in the real world of marketing research. Apart from that, we should not let significance testing do our thinking for us; we should first ask ourselves whether a difference or correlation we have found is large enough to have practical significance. If not, it does not matter whether it is statistically significant. I have also found that patterns of results are more enlightening and trustworthy than masses of individual significance tests.
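
The hazard of data dredging is easy to demonstrate. The simulation below compares two subgroups drawn from exactly the same population two hundred times; roughly five percent of the comparisons come out "significant" at the 0.05 level even though no real difference exists. The data are pure noise, generated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests = 200
false_positives = 0

for _ in range(n_tests):
    # Two subgroups drawn from the same population: any "difference" is noise.
    group_a = rng.normal(loc=5.0, scale=1.0, size=150)
    group_b = rng.normal(loc=5.0, scale=1.0, size=150)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} comparisons were 'significant' at p < 0.05")
```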

Big data

Thus far, big data may not have made life easier for marketers. David Hand, a former president of the Royal Statistical Society, describes what he calls the law of truly large numbers, saying that "with a large enough number of opportunities, any outrageous thing is likely to happen." With a gigantic number of customer records and variables, for example, significance testing is seldom helpful in flagging chance results and, in any event, the more we look the more we will find ... though what we find might be fleeting. The signal-to-noise ratio can be very small in big data.
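
The arithmetic behind the law of truly large numbers is straightforward. If a striking coincidence has only a one-in-ten-thousand chance of appearing in any single comparison, the probability of seeing it at least once is 1 - (1 - p)^n, which approaches certainty as the number of comparisons grows. The figures below are purely illustrative.

```python
p = 1e-4   # assumed chance of a striking coincidence in any single comparison
for n in (1_000, 100_000, 1_000_000):
    prob_at_least_one = 1 - (1 - p) ** n
    print(f"{n:>9,} comparisons: P(at least one 'outrageous' finding) = {prob_at_least_one:.3f}")
```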

Related to this is HARKing, or hypothesizing after the results are known – something of which we need to be mindful. An example is the imaginative use of crosstabulations in marketing research. The ways in which we define consumer subgroups are often fairly arbitrary, even with something as basic as age group. It is not sound or ethical practice, however, to redefine these groupings after having looked at the data in order to find something to please or impress the client.

Models and reality

In analytics there often is a trade-off between explaining and predicting. Many models provide us with a good understanding of why patterns occur – for instance, why certain segments of consumers buy certain brands for certain occasions – but some of them are so complex that they don't predict the behavior of a new sample of consumers that well. Conversely, some algorithms predict well but are not intuitive and cannot be easily explained in non-mathematical language. This quandary is not present in all research but is a frequent challenge modelers must face.

Models are not reality, only simplified representations of reality. In The Grand Design, Stephen Hawking and Leonard Mlodinow outline what they term "model-dependent realism" and conclude that if two physical theories or models accurately predict the same events, "one cannot be said to be more real than the other; rather, we are free to use whichever model is most convenient." Most statisticians I know would not find this a controversial statement. It is not at all unusual for two or more models to provide an equivalent fit to the data but suggest very different interpretations and implications for decision-makers. The decision about which model to use, or whether to go back to the drawing board, should not be made solely on the basis of criteria such as the BIC or cross-validation figures. This does not imply, of course, that the decision should rest on purely subjective considerations.
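
As a toy illustration of equivalent models, the sketch below fits two different diminishing-returns curves (one logarithmic, one square-root) to the same simulated spend-and-sales data. Their in-sample fit is nearly identical, yet they imply quite different sales levels if spend were pushed well beyond the observed range. All figures are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
spend = rng.uniform(5, 50, 300)
sales = 20 + 8 * np.log(spend) + rng.normal(0, 3, 300)   # "true" process is logarithmic

def fit(design, y):
    """Least-squares fit; returns coefficients and R-squared."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ coef
    return coef, 1 - resid.var() / y.var()

ones = np.ones_like(spend)
models = {"log":  (np.column_stack([ones, np.log(spend)]),  np.log(500.0)),
          "sqrt": (np.column_stack([ones, np.sqrt(spend)]), np.sqrt(500.0))}

for name, (design, feature_at_500) in models.items():
    coef, r2 = fit(design, sales)
    pred = coef[0] + coef[1] * feature_at_500
    print(f"{name:>4} model: R-squared = {r2:.3f}, predicted sales at spend = 500: {pred:.1f}")
```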

GIGO – garbage in, garbage out – may be one of the handiest acronyms ever devised. However sophisticated, a mathematical model won't be helpful if it is based on data that are not relevant to our problem or if the data cannot be trusted.

Probabilities versus categories

We humans love to categorize and are strongly inclined to think dichotomously, which is perhaps why we also love to quibble so much about definitions. Categorization can be useful, especially when quick go/no go decisions are absolutely required and hard evidence is scant, but this mode of thinking can introduce rigidities and encourage bad decisions. Though it does not come naturally to us, thinking in terms of probabilities, especially conditional probabilities, will often lead to better decisions.
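
A tiny, entirely invented example shows the difference between a category label and a conditional probability: instead of tagging a segment as "buyers" or "non-buyers", we can compare the probability of purchase given exposure to a campaign with the probability given no exposure.

```python
import pandas as pd

# Invented counts, purely for illustration: 400 exposed consumers (72 purchased)
# and 600 unexposed consumers (60 purchased).
data = pd.DataFrame({
    "exposed":   [True] * 400 + [False] * 600,
    "purchased": [True] * 72 + [False] * 328 + [True] * 60 + [False] * 540,
})

print(f"P(purchase)               = {data['purchased'].mean():.2f}")
print(f"P(purchase | exposed)     = {data.loc[data['exposed'], 'purchased'].mean():.2f}")
print(f"P(purchase | not exposed) = {data.loc[~data['exposed'], 'purchased'].mean():.2f}")
```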

Keep an open mind but keep it simple

Confirmation bias is a very human tendency to search for or interpret information in ways that confirm our preconceptions. On the other hand, we also need to be wary of falling into the trap of assuming that an exotic theory is necessarily a valid one! Simple answers, even if boring, are more likely to be true than elaborate ones. We marketing researchers are frequently guilty of both these cognitive sins without being aware of them and without any conscious attempt to skew the results of our research or find something sexy. Avoid confusing the possible with the plausible and the plausible with fact. It's also not difficult, though, to miss something of genuine practical significance that lies hidden beneath the surface of our data, so caution in both directions is urged.

A few more tips

• Do your homework. Many phenomena have more than one cause and I would urge you to integrate data and information from diverse sources and to adopt a holistic perspective with regard to analytics. Printing "Think Multivariate!" on our T-shirts may be impractical but nevertheless it's a useful mind-set for marketing researchers to have.

• As a rule marketing research is most valuable when motivated by specific business objectives and when the research design and interpretation of results are closely tied to these objectives. Note that research need not be immediately actionable and can also add value by providing context, for example in market entry feasibility studies.

• When designing research, first consider who will be using the results, how the results will be used and when they will be used, and then work backward into the methodology. Don't let the tools be the boss.

• Develop hypotheses, even rough ones, to help clarify your thinking when designing research. These can be formally tested against the evidence when data become available.

• Take care not to over-interpret data. I have witnessed instances in which detailed profitability calculations have been made based on data that should really have been interpreted directionally – not as precise figures – or even ignored.

• When you observe a pattern of potential interest in the data, before jumping to conclusions it's best to ask yourself a few basic questions: 

  1. Is this pattern actually real? 
  2. Is it strong enough to be meaningful from a business point of view? 
  3. If it is, what are its business implications? 
  4. What could plausibly have caused this pattern? Are there other likely causes?
  5. Do I have real evidence that what I think are the causes are the actual causes?

• Remember that a decision made too slowly is a bad decision ... but a decision made hastily is not a good decision either.

• Be skeptical and don't let yourself be pressured by the opinions of "thought leaders."