Editor’s note: Rich Raquet is president of TRC, Fort Washington, Pa. This is an edited version of a post that originally appeared here under the title, “Does the election surprise mean surveys are not reliable?” 

The surprising result of the election has many people questioning the validity of polls: How could they so consistently have predicted a Clinton victory? Further, if the polls were wrong, how can we trust survey research to answer business questions? Even sophisticated techniques like discrete choice conjoint or max-diff ultimately rely on the same kind of data, so this is not a trivial question.

As someone whose firm conducts thousands of surveys annually, I thought it made sense to offer my perspective. So here are five reasons I think the polls were wrong, along with how each issue could affect our work.


1. People don’t know how to read results.
Most polls had Clinton ahead by 2 to 5 points, and the final popular vote was nearly dead even (Secretary Clinton won it by a slight margin). At the low end, that lead is within the margin of error; at the high end, it is not far outside it. Thus, even if everything else had been perfect, we would have expected the election to be very close.
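
To put the margin of error in concrete terms, here is a quick back-of-the-envelope sketch. The sample size, confidence level and candidate share below are assumptions chosen purely for illustration, not figures from any particular poll.

```python
import math

# Illustrative only: a hypothetical poll of 1,000 likely voters at a 95% confidence level.
n = 1000   # assumed sample size
p = 0.48   # assumed share for the leading candidate
z = 1.96   # z-score for 95% confidence

# Standard margin of error for a single proportion.
moe = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/- {moe * 100:.1f} points")  # roughly +/- 3.1 points
```

With a typical sample of this size, a reported 2-point lead sits well inside the noise, and even a 5-point lead is not far outside it once you account for uncertainty on both candidates' shares.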

More important from the standpoint of market research is the fact that a difference of zero to five percentage points is rarely meaningful. Imagine you are testing two positioning statements and one is preferred by 45 percent of respondents and the other by 48 percent. Are you likely to choose the 48 percent message based on those numbers alone? I think it more likely that you will read them as telling you there is no real difference and look for other data (either within the survey or outside it) to drive the decision.
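
As a rough sketch of why a 45-versus-48 split usually isn't decision-worthy on its own, consider a simple two-proportion check. The cell sizes of 400 respondents per statement are an assumption for the sake of the example.

```python
import math

# Hypothetical example: 400 respondents evaluate each positioning statement (assumed cell sizes).
n1, p1 = 400, 0.45   # statement A preferred by 45%
n2, p2 = 400, 0.48   # statement B preferred by 48%

# Two-proportion z-test using a pooled estimate.
pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
print(f"z = {z:.2f}")  # about 0.85, well below the ~1.96 needed for significance at the 95% level
```

At these sample sizes the gap is statistically indistinguishable from zero, which is exactly why you would look for corroborating evidence before picking a winner.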


2. The respondents were targeted incorrectly.
Prior to the election, my friend and mentor posted a question on Facebook about whether the reported large number of new registrations was potentially skewing the results of polls. This brings up an important aspect of polling: the goal is not to represent the opinions of the country but rather the opinions of people who will actually vote. Since this group changes with each election (turnout in the past five presidential elections has ranged from 49 to 57 percent of eligible voters), this is very challenging. Different firms use different methods to predict who will vote, but ultimately there is a high degree of art involved, as opposed to science. Early reports are that rural voters came out in big numbers and most polls didn’t fully account for that.

Other types of research can have this problem. For example, if we want to understand the preferences of people who are going to buy a car in the coming year, how do we know whom to talk to? Typically, like pollsters, we ask about their intentions. And as with voters, consumers don’t always follow through on their intentions. While we can’t eliminate this problem, we know that intentions correlate heavily with behavior. Researchers have asked about future buying behavior and then checked back later to see what respondents actually did: the higher the stated purchase intent, the more likely people were to follow through. Certainly, some with high intent didn’t buy and some with low intent did, but the correlation was very strong. As such, any error this creates should be relatively small: perhaps large enough to throw off an election poll showing a three-point race, but not large enough to make us lose trust in surveys for business decisions.
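
A minimal sketch of that kind of follow-up check might look like the following. The intent categories and counts here are invented purely to illustrate the pattern, not results from an actual study.

```python
# Hypothetical follow-up study: stated purchase intent vs. actual purchase a year later.
# All figures below are illustrative assumptions, not real results.
followups = {
    "definitely will buy": {"asked": 200, "bought": 120},
    "probably will buy":   {"asked": 300, "bought": 105},
    "might or might not":  {"asked": 250, "bought": 50},
    "probably will not":   {"asked": 250, "bought": 20},
}

for intent, counts in followups.items():
    rate = counts["bought"] / counts["asked"]
    print(f"{intent:22s} follow-through: {rate:.0%}")

# The pattern to look for: follow-through climbs steadily with stated intent,
# even though no group converts at 100% and no group converts at 0%.
```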


3. Can we trust respondents?
When doing surveys, researchers rely on respondents’ honesty, and the same is true of polls. In both cases, however, there are situations in which you should be on the lookout for potential dishonesty.

In political polls, one major source of this is often referred to as the Bradley effect. I like to think of it as the embarrassment effect. Imagine you are voting for a candidate your family and friends dislike intensely. When the discussion turns to the election, you might decide to keep quiet rather than draw the ire of people you like. Now imagine you are taking a survey about the same race. You might answer the vote question “undecided” even though you are in fact decided. In this election there was an unusually high number of undecided voters, and they broke overwhelmingly for Trump. Given his standing in the polls and the often-negative news coverage of his campaign, it is not hard to imagine people being cautious about admitting he had their vote.

In product research, passions are rarely as strong as they are in politics, but that doesn’t mean we can ignore this. I am always careful with pricing research, for instance. Respondents are consumers, and we know that techniques like price laddering can introduce bias. If I offer you a product at $10 and you say “no thanks,” and I then offer it at $8, some respondents will be savvy enough to guess that another “no thanks” will yield an even lower price. That’s why I generally prefer conjoint or monadic designs over laddering. When I do have to use laddering, I use other means to try to keep respondents honest.
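
For illustration, here is a bare-bones sketch of the monadic alternative: each respondent sees exactly one price, so there is no sequence of offers to negotiate against. The price points and assignment scheme are assumptions for the sketch, not a description of how any particular study was run.

```python
import random

# Illustrative monadic pricing design; the price points are assumed for the example.
PRICE_POINTS = [8, 10, 12]

def assign_price(respondent_id: int) -> int:
    # Each respondent is randomly assigned a single price point and never sees
    # another one, so there is no ladder of offers to game.
    return random.Random(respondent_id).choice(PRICE_POINTS)

# Demand at each price is then read by comparing purchase intent across the cells.
for rid in range(5):
    print(rid, assign_price(rid))
```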


4. Non-response bias and other targeting issues.
This is not a problem unique to elections. Cell phone-only homes, caller ID and so on have been an issue in our industry for decades now, and the move to the Web didn’t change that. A shrinking minority of people participate in surveys in any form, and our results could be badly skewed if responders’ opinions differ from those of non-responders. There is no way to know if this is the case … we can only surmise that it might be.

This is not a new problem, and thus far it has not proved to be a crippling one either. Both polls and market research have proven their value even as response rates have dropped. Still, it is worth asking whether non-responders to a given survey are likely to think differently from responders. This is why we should seek to draw the most representative sample we can and make our surveys engaging, so that a higher percentage of people participate.

I also think that the potential for bias differs depending on the subject. For example, people who value privacy are far less likely to do surveys (they screen calls, they don’t join panels). If those privacy-minded people also tend to share particular political views, then polls are likely to be skewed a bit as a result.


5. Some of these polls are just bad research.
The polls were not universal in their results. The Investor’s Business Daily poll was (as it has been for several cycles now) pretty close; its final poll had the race within a point. Others were farther off, and most were around that three-point range. These polls used different data-collection methods (from phone to automated phone to Web and so on) and different sampling schemes. Some were done by partisan organizations and others by news organizations. Some were transparent about how they developed their results; others were not.

There is little doubt that some of these polls were done badly on purpose (to make a particular candidate look good) or simply not done with care. Don’t work with a firm that doesn’t dot the i’s and cross the t’s.


The failure of the polls is not a universal condemnation of survey research. Rather, it is a warning that you need to get the fundamentals right and put the results in the proper context. I think this is something quality research firms do well, and thus clients can have confidence in the research we do.