Editor’s note: Roy Eduardo Kokoyachuk is co-founder and principal of Los Angeles-based research firm ThinkNow.

In 1954, Darrell Huff called out the dangers of misrepresenting statistical data in his book, “How to Lie With Statistics.” I don’t know how big a problem bad survey data and misinformation were in the 1950s, but fast forward to 2019, and social media and 24-hour news cycles have created an explosion of content that purports to be factual. Chances are, a percentage of it is not.

As a professional market researcher, I probably spend more time than most reading the small print on market research and public opinion surveys. I’ve come across several instances where survey data was misinterpreted, misapplied or just plain wrong. The reasons vary. Sometimes they are honest errors; other times the data is intentionally designed to mislead.

To the trained eye, some of these discrepancies are easy to spot, but others are not. Here are a few things I look for when reading polls and market research results to help me identify faulty research.

A common problem with survey results is that respondents often answer a different question than the one the survey designer intended to ask. This can happen because the respondent didn’t understand the question or because their preferred response was not an option in a closed-ended list. The Brexit referendum may be one of the most consequential examples of this issue. It offered a binary choice, remain or leave, without providing a way to capture more nuanced responses. Fifty-two percent chose leave, but many voters said they did so to air their dissatisfaction with the U.K.’s governance and would have chosen something else had there been options that addressed their concerns. In fact, new research from YouGov suggests that only 33% of the British electorate prefers a hard leave option.

Market researchers play a critical role...