Editor’s note: George Silverman is president of Market Navigation, Inc., an Orangeburg, N.Y., research firm. He is a member of the Qualitative Research Consultants Association.

When quantitative findings seem to contradict qualitative findings, which should you believe? Often, the automatic assumption is that the quantitative findings must be right. After all, the quantitative data come from a large, “scientific” sample and the findings are expressed as numbers to one or two decimal places, so they must be precise. Others believe that the qualitative findings must be right. After all, qualitative research probes deeper into human motivation instead of counting the answers to relatively superficial - or even wrong - questions.

Neither stance is necessarily correct. Qualitative research and quantitative research complement each other. In any given case either may be correct, or both may be correct - about somewhat different questions - even though the findings seem contradictory. Of course, they may both be wrong. So how do you interpret the findings in this kind of situation?

This is a very difficult subject that raises fundamental questions about how we separate truth from illusion. There are no easy answers here. I wish I could say that in a conflict between qualitative and quantitative findings you should always believe the quantitative, or always believe the qualitative. That is how some people operate, particularly people with a quantitative bent. They think that qualitative is the fuzzy stuff you do to refine the questionnaire before you do the “real research.” They would do well to remember that a number is the result of a mathematical operation, not necessarily the solution to a problem.

Also, let’s remember that bad research can yield any findings whatsoever. So one would expect bad research of any kind to contradict bad research of any other kind. I’m assuming that we are talking here about soundly designed research, competently executed.

Let’s take some examples, each of which has happened to me several times.

Example 1: Desired attributes differ between focus groups and survey.

You conduct a series of focus groups and determine that a given set of product attributes is most desired by prospects. A survey produces an entirely different mix of attributes, or an entirely different ordering of the same attributes. Which should you believe?

As I said, there are no easy answers here. Let me point out a few issues for your consideration as you figure out the apparent discrepancy. Determining which set of attributes most closely fits the actual situation will probably depend on how the questions were asked. Product attributes are a funny thing. When you ask people what they want in a particular product, they tend to come back to you with the “must-haves.” These are the attributes that are absolutely necessary to their even considering the product. However, all products must have these attributes in order to be considered. So they are not what I call the decisive attributes, the attributes upon which people decide when considering their final alternatives. For example, if you ask people why they bought a particular minivan, you will get answers such as quality, service, styling, etc. However, if you ask people to describe their experience of purchasing a minivan, or ask them to tell stories about purchasing a minivan, or use a variety of other projective techniques, you’ll soon discover that minivans are purchased based upon cup holders and other things that most of us would regard as trivial amenities.

The basic attributes having been satisfied, people look for the small points of differentiation. They would never choose cup holders even if someone were astute enough to put that attribute into a survey. They probably wouldn’t even bring it up in a focus group under direct questioning, except in the form of a wisecrack. (Many a true word is said in jest. Take the wisecracks seriously.)

So, the point here is to examine very carefully exactly how the questions were asked and how meaningful the answers are likely to be. When you ask people for attribute lists, or have them rank attribute lists, all you are getting is the answer to the question, “How do people consciously rank attribute lists?” How people actually act on their ranked attributes is an entirely different matter.

To directly answer the question of whether qualitative or quantitative is likely to have yielded better answers: if approached in the traditional ways, in this case neither is likely to be correct. The best ways, in my opinion, to identify which attributes are actually causing brand choice are indirect, projective qualitative techniques and indirect statistical quantitative techniques.
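To make this concrete, here is a minimal sketch of one common indirect statistical approach, often called derived-importance analysis: instead of asking people to rank attributes, you infer an attribute’s importance from how strongly its ratings correlate with overall preference. (The choice of technique and every number below are purely illustrative, not a prescription.)

```python
# A minimal sketch of derived-importance analysis; the data are invented.
# Rather than asking respondents to rank attributes, correlate each
# attribute's ratings with overall preference for the product.
import statistics

# Six hypothetical respondents rate two attributes and overall preference (1-10).
attribute_ratings = {
    "quality":     [9, 8, 8, 9, 9, 9],  # the stated must-have: everyone rates it high
    "cup holders": [3, 8, 2, 9, 6, 3],  # the amenity nobody admits to caring about
}
overall_preference = [4, 9, 3, 10, 8, 3]

for attribute, scores in attribute_ratings.items():
    r = statistics.correlation(scores, overall_preference)  # requires Python 3.10+
    print(f"{attribute:12s} derived importance (r with preference): {r:+.2f}")
```

The must-have shows almost no derived importance precisely because everyone demands it and everyone gets it; the “trivial” amenity tracks preference almost perfectly, which is the statistical fingerprint of a decisive attribute.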

Ponder this classic example from the testing of brightly colored, inexpensive cameras: People in focus groups who were shown the cameras loved the idea. People answering surveys were relatively neutral. But when people were allowed to pick one of the cameras to take home, they all picked black! Behavior trumps talk.

Example 2: Focus groups love the product, sales prove otherwise.

A series of focus groups tells you that opinion leaders, customers and prospects love the product. But the sales curve is declining, and surveys indicate that while there is no dissatisfaction with the product, people have no intention of buying it.

Example 3: Sales are soaring, surveys indicate high eagerness to try, but focus groups indicate product dissatisfaction.

Conversely, the sales curve of a new product is going through the roof, and surveys indicate that people are extremely eager to try the product. They even indicate that they would pay much more for the product than its current selling price. The situation is interpreted as a smashingly successful product launch with additional pent-up demand. The product management team and their agencies are drinking champagne. However, in some focus groups originally designed as a disaster check on some ad copy, you discover that the initial users are encountering difficulties after a few months of use and are dropping the product. In fact, the initial users are extremely disappointed, and many are angry.

Let’s look at Examples 2 and 3. First of all, it’s important to understand the nature of sampling. I’m fond of saying that one person’s bias is another person’s sample. When you include early users of a product, you are automatically selecting experts, innovators and early adopters. That is often an extremely productive thing to do, and I wish it were done more often. But remember that you are automatically selecting a different type of person than you will reach in an overall blanket survey. Also, since these are very small numbers of people, they will make up only a very small part of the sales curve. So, when the experts, innovators and early adopters are raving about a product, as in Example 2, you are working with a very promising product indeed. Surveys and sales curves are likely to seriously underestimate the potential of the product - as long as a way can be found to bridge the chasm to the early majority. This product is likely to succeed no matter what the quantitative data suggest.

Example 3 is a very frequent occurrence that has cost many product managers their jobs. Sales are soaring, surveys are positive, but focus groups indicate that people are dropping the product after a period of time. For instance, I have worked with about a dozen new drugs over the years where the initial sales curve was extremely positive, as were many other initial quantitative measures. I call this the “try and drop” curve. As long as increasing numbers of people are trying a new product, the sales curve will go up even if most of them are subsequently dropping the product. The main way to tell a try and drop curve from a successful product curve, without waiting for the inevitable precipitous drop, is to track groups of triers. The most expeditious and timely way to do that is in focus groups. These people may have used the product that day. In telephone groups or online groups they may even be using the product (a snack food or a drink) during the group. When those groups tell you that the product doesn’t work or has other fatal flaws, run for the hills. Or, if it is a really good product, do something to fix the mess. If you don’t act quickly, the word-of-mouth is likely to overwhelm the rest of the marketing.
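To see why a try and drop curve fools people for so long, consider this minimal simulation sketch (every number is invented for illustration): each month’s sales are new triers plus repeat purchases from the fraction of last month’s buyers who stuck with the product.

```python
# A toy model of a "try and drop" curve; all figures are hypothetical.
RETENTION = 0.30  # assumption: only 30% of each month's buyers buy again

def run(new_triers_per_month):
    """Print monthly unit sales: new triers plus surviving repeat buyers."""
    active = 0.0
    for month, triers in enumerate(new_triers_per_month, start=1):
        repeats = active * RETENTION   # last month's buyers who stayed
        sales = triers + repeats       # one unit per buyer this month
        active = sales                 # this month's buyers feed next month
        print(f"month {month:2d}: new={triers:6d}  repeat={repeats:8.0f}  sales={sales:8.0f}")

# Trial expands for six months, then the pool of fresh triers dries up.
run([1000, 2000, 4000, 8000, 16000, 32000, 4000, 500, 0, 0])
```

Even with 70 percent of buyers abandoning the product, sales climb every month for half a year; only when fresh triers run out does the curve collapse. That is why tracking cohorts of triers tells you months earlier what the aggregate curve will only confess later.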

Example 4: Focus groups love the idea, surveys of early adopters reject it.

People love the product in the qualitative concept development phase. However, surveys among the potential early adopters indicate that they feel the product is taking the wrong approach, and they favor specific, named competing products. Which do you believe?

This is also a hard call, but the product probably is a loser. People can easily get overly enthusiastic or overly negative in concept development groups. You can read more about how to deal with these problems in an article at www.mnav.com/contest.htm. You have to listen very carefully for respondents’ reasons, attitudes and emotions. For instance, groups of computer store owners loved the Apple Lisa and predicted its success. It was clear that they were reacting to an elegant technological breakthrough but couldn’t answer the inevitable cost-effectiveness questions. “Cool” does not sell a $10,000 computer. So the correct interpretation (it’s a loser) was the opposite of what they were actually saying (it’s a winner).

Conversely, when the opinion leaders initially hate the product because it lacks technological sophistication, and the more typical people love its simplicity, the money is with the typical people. The Palm Pilot is a great example. So are AOL and Windows.

Example 5: The majority of qualitative respondents say one thing (e.g., prefer Concept X) but a majority of the quantitative respondents differ (e.g., prefer Concept Y instead).

More likely than not, the quantitative finding is correct (unless some special factors like those previously mentioned were at fault), because the small qualitative sample just happened to over-represent the X-lovers by the luck of the draw. This is, in fact, the most common reason that qualitative and quantitative findings conflict.
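A quick binomial calculation shows how often the luck of the draw can do this. Suppose (purely for illustration) that only 40 percent of the population actually prefers Concept X:

```python
# A minimal sketch: probability that a strict majority of n respondents
# prefers X when each one independently prefers X with probability p.
# The 40% figure and the sample sizes are hypothetical.
from math import comb

def p_majority(n: int, p: float) -> float:
    """Upper tail of the binomial: P(more than n/2 of n prefer X)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(f"10 focus-group respondents: {p_majority(10, 0.40):.0%} chance X 'wins'")
print(f"400 survey respondents:     {p_majority(400, 0.40):.0%} chance X 'wins'")
```

With ten respondents, the “wrong” majority shows up roughly one time in six; with four hundred, essentially never. Small samples are for understanding, not for counting.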

Actually, people who say these results conflict are probably making the mistake of thinking that the qualitative serves the quantitative purpose of estimating majority preferences. Rather, an appropriate purpose of the qualitative would have been to discover and understand the thoughts and feelings behind preferences for X vs. Y, whereas an appropriate purpose of the quantitative should have been to estimate the percentages of people who hold particular thoughts, feelings, and preferences regarding the concepts. (This last example and analysis were contributed by Peter DePaulo. Thanks Pete!)

The point of all this is that you have to know what exactly has been asked, of whom, and how the answers fit into the rest of the situation. You will inevitably get different views from different perspectives, but that can round out the picture if the perspectives have been carefully chosen. The meta-point here is that you either need to hire, or need to be or become, a savvy, thoughtful marketing research consultant, not a technician of qual or quant.

I hope that this has given you some things to think about when qualitative and quantitative research show different findings. This article doesn’t even begin to address the complexity of the fundamental differences between qual and quant. That’s going to take a whole book that I urge someone (it’s not going to be me!) to write.

The author wishes to thank Eve Zukergood, CEO of Market Navigation, George Balch of Balch Associates, Oak Park, Ill., and Peter J. DePaulo, marketing research consultant in Montgomeryville, Pa., for their contributions to clarifying the thinking in this article. Any mistakes, omissions, misconceptions, confusions or other transgressions are purely mine, although, believe me, they would have been worse without their thoughtful comments on short notice. © 2000 Market Navigation, Inc. All rights reserved.