When you’re not a pet rock

Editor’s note: Laurie Gelb is director, market research and planning, at HealthCore, Inc., a Wilmington, Del., health care marketing and research firm.

As a client or vendor, I’ve monitored or conducted tens of thousands of phone-based and in-person groups and individual interviews (not to mention the Web, since that’s another article). It often seems as if the fundamentals have been waylaid by the seductions outlined below, spawning “we already knew all this” complaints. The thesis of these six deadly sins is that qualitative research is at best natural conversation and at worst performance art. Since you probably don’t want to be a starving artist…

Henceforth, “researcher” signifies either the interviewer/moderator or a person who is selling, using or analyzing the study; which is meant should be obvious in context. “Client” refers to both external clients and internal customers: marketers who need useful business intelligence for informed decision-making.

The top six reasons that they don’t always get this information are:

  • Ask me no questions, I’ll tell you no lies.

Managing client expectations begins at the pitch stage, when researchers may allow clients to believe that flying or calling coast-to-coast ensures “nationally representative findings.” You can characterize L.A./Phoenix/Chicago/Houston/Miami/Philadelphia results as “geographically diverse,” but acting as if these respondents can stand in for all the potential respondents who aren’t there is counterproductive. Self-selection, recruitment and interviewer-interaction biases are more prominent in any qualitative study, just as interviewer-assisted quant studies engender greater interaction bias than self-administered ones. If you need a nationally representative sample, you need quant. Multi-city qualitative studies cannot substitute for it. “Semantics,” you say. “We know qual is directional.” Oh yeah? Read on.

  • Presto! Let there be quant.

Under the illusion of “representativeness” noted above, researchers may bring quantitative instruments into the qual setting and report the aggregate (or worse, subgroup) results as if they represented individual data points, thereby choosing a quicksand pit as a building site. It may be elementary, my dear readers, but if you interview 38 people in your “national” qualitative project, whether singly or in groups, whether they represent 38 metro areas or three, you do not have an n of 38 independent cases. Only respondents in a few areas had a non-zero chance of selection; there are far more than 38 metro areas in the U.S.; three of your respondents may have signed up with the same research center as friends, and so on. The misconception that qualitative findings should be cut-and-pasted into quant design rests on this faulty premise as well, but that’s another story.

Qual must provide context that numbers can neither replace nor explain, or there’s no reason to do it. It’s reasonable to ask what someone would anticipate doing under certain circumstances, or how, if at all, participants would differentiate various stimuli. However, those answers are integrally connected to the “what, when, where, why, how” that presumably the rest of the interview has been about. Understanding this connection is the “beef” into which marketing can sink its teeth. If clients ask for quant instruments in exploratory settings, I politely explain why these could compromise our objectives, and then outline what the research will do.

There’s nothing wrong with yes/no and structured or numeric questions as they might occur in real conversations. There is something wrong with aggregating the results as if they were the Harris Poll, or separating them from their context. This also argues against routine “head counts” for questions or forced differentiation. The information the client needs should be in the verbatims, not a show of hands. Just because we can force respondents to comment that layout A is very “green” doesn’t mean we learned anything. If we aren’t presenting stimuli that can evoke different reactions and preferences, and allowing exploration of why the responses differ, we have brought inadequate stimuli to the table; torturing the respondents all night won’t change that.

As for the notion that using card sorts, rankings, ratings and such will “facilitate discussion,” in over 20 years of interviewing (and twice that as a conversationalist), I can’t recall ever needing a quantitative catalyst. Do you? Sometimes, perhaps, these tools are attempts to substitute for conversational skills/product category knowledge. But interviewers who look or act ill at ease should be given more prep/training, or replaced, not handed stacks of forms. Maybe good conversations aren’t as easy to sell (sounds too simple?) or even deliver. But the effort is well worth it.

Besides wasting time, superimposing quant reroutes the discussion. Mid-conversation with your friend, do you ask, “How was your date with George? Here, do this attribute-rating task so I can more fully understand your viewpoints”? When we later try to reconcile free-flowing conversation with eked-out data, we are no longer doing qual work, or anything else useful. Apples and oranges…

Turnaround time often drives the perceived need to quantify qual, of course. However, given many options for fast-turnaround quant, there is no real justification for sacrificing qualitative fundamentals on the altar of deadlines.

  • It’s not a product; it’s a bundle of attributes.

We could spend hours discussing how this assumption has constrained market insight for products where attributes are neither readily changed by the manufacturer nor independent (biopharma is an excellent example). “Which is more efficacious, drug A or drug B?” is a red herring in any setting. What qual can tell us is:

Do perceived efficacy differences, if any, actually affect decision-making between drugs in this class? If so, under what circumstances and why? If not, what does and how?

Qualitative is no better place than quantitative for the faulty assumption that all decision-makers are consciously trading off all attributes all the time. Nor is it a setting in which to “validate” the attributes (domains and measures) and levels (threshold values) used to make decisions when the attributes are not universally salient and defined (two vs. three bedrooms is clear; a “crunchy” vs. “not crunchy” cereal less so). The shortcuts used to decide between products whose attributes themselves are a judgment call demand other methodologies, e.g., taste tests for the cereal or heuristic market research for pharmaceuticals.

  • It’s been 15 minutes. I know these people now.

“Jane, you indicated earlier that Thrill Park was open for an entire year before you took your kids there. Obviously, you’re very cautious about new destinations. What would it take for you to go to Chill Park within two weeks after it opened?”

“Barney, you mentioned that you went to Thrill Park the day it opened. I’m assuming you plan to visit Chill Park on its opening day as well. No? Why not?”

Clearly, this is leading the witness. If your friend has just told you about a disastrous first date, do you immediately say, “Well, it’s clear that you won’t be going out with George again,” or do you wait for her to tell you that? It’s certainly going to take more than 15 minutes’ worth of experience with her to know!

When left alone to tell their story, people generally articulate the truth we are seeking. Recently, I interviewed individuals suffering from certain symptoms, most of whom had not sought help. Thirty minutes into our conversation, one deeply conflicted respondent confided his fear of certain medications because of past recreational drug abuse. Even knowing that drug abuse is common in this population, could I have broached that earlier and obtained the same response? Another sufferer ultimately admitted his concern that seeking help would create a pre-existing condition issue with his health plan. Again, not a Q1 finding. A third was worried about his commercial driver’s license. And so on. Though I have also interviewed patients whose disorder can be transmitted through infected drug needles, I did not broach drug abuse with them, because not all of them have that history. Airing your assumptions ices the dialogue.

Insight for us frequently requires introspection and even self-discovery for the respondent. An engaged participant is building/sharing something, not getting to the end. There is always more to say. When interviewers treat an interview as a straight line, words on paper, subjects clam up, join the race and accept the constraints implied. They’re being paid to. But that only costs the client, ultimately. Techniques to establish rapport and encourage conversation are found in “Establishing a Comfort Level” by Jim Eschrich (Quirk’s, April 2002).

  • The discussion guide as personal flotation device (PFD).

This was my pet peeve as a client. Qualitative researchers, as opposed to copy readers, are paid to ask the questions that are not on the discussion guide and never can be. The raison d’être of qual is to follow the respondent’s context, not provide it.

Joe says, “I can’t really distinguish between widgets and digits.” We should ask, “And why is that?” And then, depending on the response, maybe, “Under what circumstances could that change?” or a more specific follow-up. What I too often hear is, “OK, let’s go on to the next question.” Arrrgh!

Sally explains, “Those criteria don’t make sense to me. I just go by the blue color.” Next should be, “Is there anything else you have ever considered besides the blue color?” Or, “Are there ever times when the blue color is less important?” I cringe to hear, “Does it matter what shade of blue it is?” and then watch the interviewer move to the next topic.

Questions like “How will this study be used?” or “Has anyone else mentioned that?” or “Who’s sponsoring this study?” often generate mumbles, haughty comebacks or refusals to respond. Agencies should draft pat responses for client approval and train interviewers to partner with, not patronize, the respondent. For example, if the sponsor must remain blinded throughout (not a given; sometimes the conclusion is a good time to uncloak with a couple of wrap-up questions), one possible answer is, “The identity of the company sponsoring the study is confidential, just as your identity is kept confidential.” That is not a blow-off like “I can’t tell you that” or the disbelieved (even if true) “I don’t know.”

In order to follow the respondent’s lead, and talk about what’s important to her, researchers must be familiar with the topic. When they aren’t, sometimes the only unscripted words spoken are the ubiquitous “OK,” or even worse, “Good” or “That makes sense” or similar reinforcement. Or the fishing line probe: “Can you expand on that a little?” or the ever-so-precious, “Tell me more.” When did you last speak those words out in the world?

Another PFD danger is the rush to get all the canned questions in, manifested as patting the respondent on the head after short answers and interrupting long ones. An approval-seeking exchange creates a breezy but superficial conversation, and a skilled interviewer nips it in the bud by probing objectively: “Just to play devil’s advocate…” but never, “Many of your colleagues have said…” An interviewer should gently decline to restate the majority viewpoint: “This is about what you think/how you reacted to that/made that decision at the time.” Certainly, we want to know if the respondent would make a different decision today, but we want to find that out without seeming to pass judgment on her.

In groups, the PFD mentality sends the moderator around the table with the dreaded, “What do you think, Ryan?” though poor Ryan has contributed to eight straight questions and is now trying to guzzle his soda.

You might casually ask a lunch colleague, “What about you, Anne?” but Anne knows she can say, “Oh, I dunno” without becoming an outcast. Focus groups are not voting booths; they offer no privacy. If you need bared souls, book one-on-ones. Moreover, groups, as opposed to individual interviews, are appropriate only when the probability of achieving the objectives is greater with consensus than without it. Yet groups are often booked for less substantive reasons (such as scheduling). With today’s Web-enabled facilities and phone monitoring, travel preferences should never dictate research strategy.

Repeating questions is another common consequence of the PFD approach, and nothing derails a conversation faster. The respondent becomes bored, impatient, even angry. “Isn’t this person even listening to my answers? Why am I talking, then?” (Would your boss stand for this?) Any client who berates an interviewer for not asking Q13 when the topic was already covered in Q6 needs tactful education: “We had already covered that, so I went for his reaction to the fried egg concept. I know we need to use our time as efficiently as we can.”

  • “Tonight’s performance is sold out.”

That sentence sums up many interviews. We’ve all seen the interviewer whose self-importance is exceeded only by his ability to express it. “What I’m looking for is…” “That’s not what I meant.”

The only legitimate uses of “I” in probing are non-judgmental, e.g., “I guess I’m confused; I thought I heard you say earlier that you only buy blue widgets, but just now you spoke of preferring green ones?” Otherwise, substitute “you” for “I” and see how much further you get. Do you tell your colleague, “Carl, what I was looking for you to tell me was what Mark thought of the report,” or do you simply ask, “What did Mark think of the report?” Presenting the interviewer, a person the respondent has (hopefully!) never encountered before, as someone whose needs must be met heaps pretense on an already artificial context. And if you are using professional respondents for whom this is no big deal, you’re even worse off.

Interviewers who “rep” any point of view (like the client’s) also taint the feedback. A respondent who senses that the interview would go more smoothly if she followed the path of least resistance must either fall in line with that viewpoint or turn oppositional to it; neither reaction will hold up once she’s gone. Being truly persuaded (not bludgeoned) by other group members is fine. External influences on decision-making exist, and if we engage participants, they will tell us when and how those influences work. But we have to understand first what the respondent was thinking before she walked into the facility or picked up the phone. If we fast-forward past that point, we’ve lost the insight we invested in.

Finally, when the play’s the thing, the researcher vamps for his clients/colleagues, takes stage direction during and between interviews, and cheerfully (at least outwardly) agrees to instructions like:

“Wrap this one up. This guy is clueless.”

“No one’s responding to ad #3. Make sure you get more out of them.”

“The whole group hates the double-page spread. Let’s leave it out from here on.”

Following these suggestions blindly could obviously compromise results. But what mid-project adaptation is actually useful? The short answer is, do what the project needs, err on the side of inclusion and justify it in terms of the objective, with minimal use of “I” or “you,” two words that in a client/researcher discussion may breed defensiveness or aggression. We’re aiming for smiles and nods during the final presentation; we may not see as many in the viewing room.

Be a chameleon

Perhaps the best advice for qualitative researchers and clients resembles Ray Bradbury’s advice to writers (keen observers themselves): “Be a chameleon, ink-blend, chromosome change with the landscape. Be a pet rock, lie with the dust…”