Editor’s note: Joe Hopper is the president of Versta Research, Chicago.

The next time you analyze the results of your customer satisfaction or brand loyalty tracking study and you notice an upward or downward shift, ask yourself this: Is it reasonable to think that certain customers – either the happy ones or the unhappy ones – were more willing to give you their opinions than the other group?

If so, your results may be an artifact of non-response bias, a problem far more common than we think. Consider this conclusion from a study of political polling by Andrew Gelman, a prominent statistics and political science professor at Columbia University:

[We conducted] a novel panel survey of 83,283 people repeatedly polled over the last 45 days of the 2012 U.S. presidential election campaign. We find that reported swings in public opinion polls are generally not due to actual shifts in vote intention, but rather are the result of temporary periods of relatively low response rates by supporters of the reportedly slumping candidate. After correcting for this bias, we show there were nearly constant levels of support for the candidates during what appeared, based on traditional polling, to be the most volatile stretches of the campaign. Our results raise the possibility that decades of large, reported swings in public opinion – including the perennial “convention bounce” – are largely artifacts of sampling bias.

So what’s a tracking-study manager to do?

  1. Always examine the sample composition carefully. Compare personal and business demographics of your respondents from wave to wave to ensure consistency, or to confirm that any changes reflect real changes in the population being sampled (see the first sketch after this list).
  2. Weight the data on strong correlates. This is what Gelman and his colleagues did to correct for the hypothesized bias in response rates in their data. If you know from previous waves, for example, that women give you better scores than men, track the response rates by gender, then weight and adjust the data at the back end (see the second sketch after this list).
  3. Caveat your conclusions. Remind your management that tracking opinions over time is no easy task, not even for high-budget, high-profile pollsters who track voting intentions during presidential campaigns.
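
To make the first step concrete, here is a minimal sketch in Python (using pandas and SciPy, neither of which the article prescribes) that runs a chi-square test on the gender mix of two waves. The data, column names and function are invented for illustration; in practice you would load your own respondent files and check every demographic you track.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical respondent files for two waves of a tracking study.
wave1 = pd.DataFrame({"gender": ["F"] * 520 + ["M"] * 480})
wave2 = pd.DataFrame({"gender": ["F"] * 590 + ["M"] * 410})

def composition_shift(a: pd.DataFrame, b: pd.DataFrame, col: str):
    """Chi-square test for whether `col`'s distribution differs across waves."""
    counts = pd.DataFrame({"wave_1": a[col].value_counts(),
                           "wave_2": b[col].value_counts()}).fillna(0)
    chi2, p, dof, _ = chi2_contingency(counts.T.to_numpy())
    return counts, p

counts, p = composition_shift(wave1, wave2, "gender")
print(counts)
# A low p-value flags a demographic shift worth investigating before you
# interpret any movement in the tracked scores.
print(f"p-value for a shift in gender mix: {p:.3f}")
```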
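
And a minimal sketch of the second step, again with invented numbers: a simple cell-weighting (post-stratification) adjustment that re-balances an over-represented group back to an assumed 50/50 population split. Gelman's team used more elaborate adjustments, but the principle is the same: weight each respondent by the ratio of the target share to the observed share.

```python
import pandas as pd

# Hypothetical current wave: women are over-represented relative to the
# customer base (assumed 50/50 here), which inflates the topline score
# because women give higher ratings in this toy example.
wave = pd.DataFrame({
    "gender": ["F"] * 590 + ["M"] * 410,
    "satisfaction": [8] * 590 + [6] * 410,  # invented scores
})

target_shares = {"F": 0.50, "M": 0.50}  # assumed population composition

sample_shares = wave["gender"].value_counts(normalize=True)
wave["weight"] = wave["gender"].map(lambda g: target_shares[g] / sample_shares[g])

unweighted = wave["satisfaction"].mean()
weighted = (wave["satisfaction"] * wave["weight"]).sum() / wave["weight"].sum()
# The weighted mean strips out the score inflation caused purely by the
# shift in who responded: 7.18 unweighted vs. 7.00 weighted here.
print(f"unweighted mean: {unweighted:.2f}, weighted mean: {weighted:.2f}")
```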

In short, don’t rest easy just because the fieldwork team says they got the required 1,500 interviews for the current wave of your tracking study. Instead, do the difficult work of analyzing whether the samples are really comparable over time and then make smart statistical adjustments to compensate.