Look beneath the numbers

Editor’s note: Ron Sellers is president of Ellison Research, Phoenix.

With talk of the 2008 election already heating up, we’ll soon be bombarded by election polls. Rather than relegating them to the political arena, business leaders can learn a lot from these polls and from how they are used and reported.

As seemingly contradictory polls are released every week through news organizations, there are a number of lessons for people who use marketing research as part of their day-to-day jobs.

Most people will come away with only a few key insights - right or wrong.

One thing we can learn is that people will typically shorten research findings to a few key results - even if those results don’t address the real meaning of the study. This is how a 46-to-45 percent result in a poll with a four-point margin of error gets turned into a one-point “lead,” when statistically the race is a dead heat. It’s just easier (and more interesting) to communicate things this way, even if it’s inaccurate.
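
To see how thin that “lead” really is, consider the arithmetic. Here is a minimal sketch in Python (the sample size, candidate names and shares are hypothetical, and it assumes simple random sampling at a 95 percent confidence level) that computes each candidate’s margin of error and shows how completely the two intervals overlap:

```python
# A minimal sketch of why a one-point "lead" inside a four-point
# margin of error is a statistical tie. The sample size is
# hypothetical, chosen to yield roughly a four-point margin at a
# 95 percent confidence level, assuming simple random sampling.
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the 95 percent confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

n = 600  # hypothetical number of respondents

for candidate, share in [("Candidate A", 0.46), ("Candidate B", 0.45)]:
    moe = margin_of_error(share, n)
    print(f"{candidate}: {share:.0%} +/- {moe:.1%} "
          f"({share - moe:.0%} to {share + moe:.0%})")

# Output:
#   Candidate A: 46% +/- 4.0% (42% to 50%)
#   Candidate B: 45% +/- 4.0% (41% to 49%)
# The intervals overlap almost entirely, so the one-point gap says
# essentially nothing about who is actually ahead.
```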

In business, if you present a finding such as “90 percent of potential customers had no specific complaints about our product,” watch how quickly people shorten that to “90 percent of potential customers like our product” or even “We have a 90 percent satisfaction rate,” even though those things are not the same at all.

As researchers, we must communicate in as clear and concise a manner as possible, and serve as watchdogs within the organization to make sure the way the findings are actually used is consistent with how they are meant to be used.

Conflicting poll results might not be measuring the same things.

On the same day, one national poll suggests the Republican leads the Democrat by four percentage points, while another suggests the Democrat leads the Republican by five. The gap between the two results is well outside the margin of error (three points for both studies), so how can this happen?

The two different studies might be asking different questions. Question wording can heavily influence the outcome of research. For instance, “If the presidential election were held today, would you vote for Senator Hillary Clinton, Senator John McCain, or someone else?” is not the same thing as, “Who are you most likely to support in the upcoming presidential election, Democrat Hillary Clinton or Republican John McCain?”

Including party identification in the question can produce higher polarization (Democrats voting for Democrats, Republicans for Republicans). Asking people who they are most likely to vote for, compared to who they would vote for today if forced into a choice, is simply asking different questions. “Vote for” and “support” also may carry different meanings to respondents. Where a question is placed in the questionnaire also has an impact; asking a series of questions about key issues one candidate supports and then asking who voters are likely to support can influence the survey outcome, for instance.

Question construction is enormously important in business-related research. Asking “What do you dislike about our product?” is not the same thing as asking “What are the things you like least about our product?” Questions should be as simple, clear and concise as possible, and open to only one interpretation. Survey results are only as good as the systematic methods used to produce them, including the way questions are phrased and the order in which they are asked.

Just because it’s reported doesn’t mean it’s accurate.

Reporters are not researchers, and often don’t understand (or sometimes don’t bother with) details such as the sample composition or the margin of error. Editors can bury or delete important details that seem too technical for the average reader or viewer. Yet these details are often crucial to knowing exactly what the research really discovered.

You’ll have the opportunity to read about plenty of surveys and studies in the media that might be meaningful in your work - but just because a trade publication is reporting that 30 percent of Americans engage in a particular activity doesn’t necessarily mean the data is being reported accurately.

Ellison Research releases significant amounts of non-proprietary research information to the national media, and it is unfortunately rather common for the media to get key details wrong. For example, we’ll release a study conducted among a representative sample of clergy from all Protestant denominations and include findings from subsamples of larger denominational groups such as Methodists, Lutherans and Baptists. The media frequently reports this as a study of “Methodist, Lutheran and Baptist ministers,” or somehow decides that “Methodist” must mean “United Methodist” - excluding all of the other Methodist groups included in this category. The information we release and the eventual coverage that results sometimes have relatively little in common.

Before you rely on a critical statistic you read in an article or hear quoted by some expert, check it out carefully with the source - you may find that the key survey finding you were going to rely on to build your business case or presentation isn’t really what it seemed to be.

People will infer causality even when they shouldn’t.

A candidate makes a snide remark about senior citizens, and three days later a poll shows his support among seniors dropped by eight points. Obviously it’s because of his unfortunate remark, right? Not necessarily. It could be he also supported a bill many seniors oppose. It could be he trivialized an issue important to seniors in another speech. It could be his opponent said something highly meaningful to seniors in her speech, which has swayed some senior support over to her.

But it’s much easier to assume a link between the remark and the dwindling support, because it seems so obvious. Particularly in the media, where audiences expect instant news and analysis, polling sometimes isn’t in-depth enough to get at the key issues.

Business too often makes the same mistake. We want to make obvious connections, because they’re, well, obvious. But survey research alone cannot establish causality, and bivariate analysis doesn’t always tell the whole story.

Let’s say your customer satisfaction study shows African-Americans are significantly less satisfied with your service than are other customers. You might immediately try to figure out why this particular segment has concerns about your service.

But digging further, you find that your African-American customers are also substantially younger than your other customers. Now, is the lower satisfaction connected to race, to age, or to both? Simply looking at the crosstabs may never tell you - it takes a multivariate statistical analysis to determine whether one or both of these variables is related to customer satisfaction. Even then, you may find a strong relationship between age and satisfaction, but you still cannot accurately say that a younger age “causes” lower satisfaction.
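
Here is a hedged sketch of what that multivariate step might look like, using ordinary least squares in Python with pandas and statsmodels. Everything in it - the column names, the simulated data, even the choice of OLS over an ordered model for a rating scale - is an invented illustration, not a prescription:

```python
# A hedged sketch of the multivariate step described above, using
# ordinary least squares on simulated data. All names and numbers
# are invented for illustration; a real analysis would run on your
# actual respondent-level file, possibly with a model better suited
# to rating scales (e.g., ordered logit).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

race = rng.choice(["African-American", "Other"], size=n)
# Mirror the scenario in the text: African-American customers skew younger.
age = np.where(race == "African-American",
               rng.integers(18, 45, size=n),
               rng.integers(30, 75, size=n))
# Simulate satisfaction that depends on age, not race.
satisfaction = 2.0 + 0.04 * age + rng.normal(0, 1, size=n)

df = pd.DataFrame({"satisfaction": satisfaction, "age": age, "race": race})

# A crosstab-style look shows a racial gap...
print(df.groupby("race")["satisfaction"].mean())

# ...but regressing on both variables at once reveals the gap as an
# age effect: the race coefficient collapses toward zero once age is
# in the model. Even then, a significant age coefficient shows
# association, not causation.
model = smf.ols("satisfaction ~ age + C(race)", data=df).fit()
print(model.summary())
```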

Labels are easy, but often inaccurate.

Consider a political poll saying one candidate leads the Democratic primary among Catholics. Seems simple enough. But what defines a Catholic, for the purposes of this study? Is it people who are baptized in the Catholic church, regardless of whether they are active in it? People who call themselves Catholic? People who actually attend a Catholic church occasionally? People who attend regularly?

It’s easy to label people, but not always accurate to do so. This is also why so many studies seem to contradict each other - one poll may define Catholics (or moderates, or evangelicals or any other group) one way while another defines them in a completely different manner.

This happens in business, as well. Who are your customers? Anyone who bought from you in the last five years? In the last six months? Anyone who shopped with you, even if they didn’t buy anything?

If you have multiple sales channels (e.g., direct response, e-commerce, wholesale and retail), is a “customer study” measuring all customers in the correct proportions? Is your study going to define “younger customers” in the same manner as one conducted by another department?

Is the study measuring the important things in the right way?

One thing too few people seem to understand is that it really doesn’t matter if 51 percent of the country votes for a particular presidential candidate - what will determine the election is the electoral votes, which nearly every state awards on a winner-take-all basis. Yet people too often look to nationwide surveys to suggest the outcome of the election, because they’re easier to conduct, report and understand - they’re a convenient shortcut, even if they’re not terribly meaningful to the end result.

How often do we fall into the same trap of taking convenient shortcuts in the business world? Surveying customers at a trade show because it’s cheaper and easier than trying to reach a representative sample nationwide. Asking quantitative questions in focus groups because it “saves having to do a follow-up study.” Cutting out the quantitative research because “so many people in the focus groups liked the product.” Interviewing donors or customers online, even though the database has active e-mail addresses for only 15 percent of the file, because it’s faster and cheaper than doing it by phone.

Even if the research is done the right way, is it measuring the right things? Just because most of your customers are satisfied does not mean they are loyal to you. Just because donors don’t think your organization is misusing funds doesn’t mean they think you are using donations for the greatest impact. Just because people recognize your logo and slogan does not mean your brand resonates with them or has a clear position in their minds and hearts.

Research is not perfect and has a margin of error.

Egad! A researcher saying research can’t do everything! That’s because no matter how valuable it is, research really can’t do everything. For one thing, all studies have a margin of error. It’s not uncommon to give customer service representatives a bonus if the satisfaction research among their customers reaches a certain level. But is it defensible to withhold the bonus if the goal is 60 percent complete satisfaction and a survey with a five-point margin of error shows 57 percent satisfaction?
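
A quick back-of-the-envelope check makes the problem concrete. In this minimal sketch (the sample size is hypothetical, back-solved to give roughly a five-point margin at 95 percent confidence), the 60 percent goal falls well inside the confidence interval around the 57 percent result:

```python
# A minimal sketch of the bonus question above: is 57 percent
# statistically distinguishable from a 60 percent goal? The sample
# size is hypothetical, back-solved to give roughly a five-point
# margin of error at 95 percent confidence.
import math

observed = 0.57  # measured share of completely satisfied customers
goal = 0.60      # threshold for paying the bonus
n = 380          # hypothetical sample size

moe = 1.96 * math.sqrt(observed * (1 - observed) / n)
low, high = observed - moe, observed + moe
print(f"Observed {observed:.0%} +/- {moe:.1%} -> {low:.1%} to {high:.1%}")

# The interval runs from about 52% to 62%, so the 60% goal sits
# comfortably inside it: this survey cannot rule out that the team
# actually met its target.
```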

The margin of error also doesn’t account for things such as whether a question was asked correctly or whether the data was analyzed properly.

On top of that, consumers are notoriously poor at accurately predicting what they will do in the future when asked directly - which is why even political polls that show a clear leader sometimes don’t correctly predict the outcome of the election.

This is not to say research is not extremely valuable, because it is. But don’t fall into the trap of thinking research is omnipotent and will predict exactly what your target market will do in the future. Research is a tool that provides you with insight and helps your organization make wiser decisions with a greater chance of success - but it isn’t infallible.

Lessons to be learned

This article is not meant to be a condemnation of political polling or the media. There are many valuable insights that come to us through political polls. However, there are also lessons to be learned, lessons that apply to businesses and non-profit organizations on a daily basis. Applying these lessons will help you make a greater, clearer impact on the company or organization you serve.