Observe your groups with care

Editor’s note: Lisa Hermanson and Kelly Wahl are partners at SofoS Market Research Consulting, Wauwatosa, Wis.

When focus groups are criticized, it’s usually because they were used when another method was called for or because they were executed poorly. And that’s a shame, because these situations are avoidable and end up reflecting poorly on all qualitative research.

If you find yourself bored in some dark, back room, mindlessly munching snacks, watching the clock and wondering what’s in it for you, chances are you’ve fallen prey to one or more of the following six pitfalls. By understanding these common mistakes, and with a bit of careful planning, you can avoid problems and end up with results worthy of attention and respect.

Pitfall #1: Relying only on focus groups

Traditional focus groups are often an excellent qualitative tool, particularly when a conversation among respondents may spark ideas, reactions or insights that might otherwise remain hidden. Focus groups should be all about the build - when one person takes what another said and expands on that thought. They are especially useful when investigating an area that people don’t often think about consciously or where their thoughts and opinions aren’t easily articulated.

But, because focus groups are discussions among strangers, the tendency is to go fairly broad and not very deep. If you need to go deeper or broader or more subconscious or less subconscious (see how intricate it can get?), focus groups can lead you astray.

Instead: Have more qualitative arrows in your quiver

Myriad other methodologies await you! Established interviewing techniques range from in-depth one-on-ones to buddy groups in dyads and triads to super-sized multi-focus groups with breakout sessions. Additionally, there’s an entire spectrum of purely observational research to explore behaviors and attitudes that respondents can’t (or won’t) describe. These can be two-day team blitzes or honest-to-goodness ethnographic inquiries lasting six months or more.

In-situ research techniques, including home visits, shop-alongs and real-world usage studies, often combine pure observation with direct consumer interaction. Each methodology serves a specific purpose, and they’re rarely, if ever, interchangeable. Get to know them - how they’re used, their strengths and their weaknesses. Employing the appropriate methodology is the first step to great results.

Pitfall #2: Ignoring the details (“It’s not a representative sample anyway”)

In any research, it’s easy to get a false read by talking to the wrong people. In qualitative, the “who” and “where” are especially important because you don’t have the security that statistical sampling gives you.

Years ago we knew a marketing director who insisted on optimizing brand positionings in his company’s home city, for the sole reason that the team wouldn’t have to travel for the research. Their brand development index was sky-high in this market, and the local respondents thought that the brand could “go anywhere” and “do anything,” which is not surprising for a default brand. Only the brand wasn’t the default anywhere else in the country, so the feedback was meaningless outside of that one market. 

Instead: Be rigorous in selecting who/what/where

When selecting locations, always consider brand development index/category development index, trend-adoption patterns, media usage and general regional distinctions (e.g., social values, flavor palates, lifestyle choices, homo- or heterogeneity of the population and so on). And try not to bend to the pressure of doing groups in some locale solely because the marketing manager wants to add on a ski weekend. Don’t laugh - it happens.

When developing screening criteria, take a systematic approach that considers usage frequency, rejection tendencies and any other demographic and psychographic details important to your category or project. Make sure your criteria are truly relevant to the subject at hand. Don’t blow off the articulation screen; done well, it can ensure that you won’t get duds - respondents who just sit there, mute, waiting for their incentives. The right screener can prevent the dreaded lament: “What’s THAT person doing in my group?”

On the flip side, remember that you’ll never have a representative sample with the small numbers used in qualitative, so don’t try. Just because your product is consumed or your advertisements are seen by a diverse population doesn’t mean that you should have one white man, one white woman, one Hispanic man, one Hispanic woman and so on. Choose people or groups for your research who you think will be most representative or most insightful, and save delving into each segment’s details for later quantitative studies.

Pitfall #3: Treating the discussion guide like a questionnaire

Qualitative is most useful when you need the color and texture that quantitative can’t provide, such as the language surrounding your product, category or idea; subconscious barriers to adoption; or unconscious consumer behavior. Setting hard parameters for where the research can go and scripting specifically worded questions limit your ability to discover these colors and textures.

In focus groups, the more explicit the questions in your discussion guide, the less free-flowing the conversation will be and, most likely, the fewer aha moments you’ll have. Lists of questions invite your moderator to engage in serial interviewing, going around the table, asking everyone the same question, recording the answers and moving on to the next question on the list, without encouraging real interaction and conversation among respondents.

Another danger of the questionnaire approach is the all-too-common “horse-race” phenomenon, where your team becomes interested only in which advertising copy/prototype/positioning/new-product idea does the best, when they should be trying to understand why and how each one works (or doesn’t work) in order to improve them. Deeper learning comes to an immediate halt when someone in the back room counts the number of respondents who like one option over another and declares, “Six out of eight - we have a winner!”

Instead: Be “qualitative” in your guide design 

Approach qualitative with general areas that you want to learn about, not a laundry list of specific questions. And make the discussion guide just that - a guide that the entire team uses to help concentrate the research in specific areas. If your moderators/interviewers/observers are good at their jobs, this will allow them to explore, discover, challenge and corroborate, ultimately yielding richer insights.

To make sure that your qualitative is centered around refining, clarifying and improving, communicate this approach to your team before the research begins and reiterate it whenever anyone goes astray. That way you’ll keep the emphasis off which one of two (or three or eight) is the winner and on how the initial options can be improved.

Pitfall #4: Treating attendance as optional (and extending invitations to “drop by”)

For core team members, there’s no substitute for physical attendance at qualitative research. Qualitative is not just about what’s said; it’s about how it’s said. Nuances ranging from voice inflection, volume and tempo to pauses, facial expressions and body language are all but impossible to catch without witnessing things live and in person.

According to Albert Mehrabian’s groundbreaking and oft-quoted study on the communication of feelings and attitudes, 93 percent of such communication is non-verbal, either in tone, volume and inflection (38 percent) or facial cues (55 percent). Mehrabian’s study is especially relevant to qualitative research because it involved inconsistencies between words and non-verbal signals when discussing likes and dislikes.

A team member who’s getting only the spoken words (e.g., from reading a written transcript) is getting only a small fraction of the communication, and the fraction that’s least likely to be spontaneous and honest. Another, who’s getting only the words and vocal inflections (e.g., from listening to tapes or watching a low-res video), is still getting less than half of what you’re paying good money to learn. As a result, these team members’ ability to help you interpret what you’re seeing and hearing will be severely impaired, and they can easily take you off track when they honestly (but completely) misunderstand what a particular respondent meant.

For example, when a consumer says “It’s OK,” it may be a non-committal approval of your product or idea. But a slight change of tone or a quick glance at the ceiling changes it entirely; it becomes a clear, but guarded, condemnation, with a subtext of “I don’t like it at all, but I don’t want to be disagreeable” or “I hate it, but if someone else likes it, fine for them.” An involuntary chuckle, grimace, eye-roll or smile will almost always tell you more than words, and those indicators are essential to interpreting a respondent’s words.

Instead: Make sure the core team attends the research (all of it!)

Everybody is busy, but if your project is worth the time, effort and money you’re putting into it, it’s worth your core team’s full attention for the brief duration of the actual consumer or respondent contact.

Attendance means really being there - paying attention, listening for nuances and watching for body language. During focus groups some years ago, a colleague who was checking e-mail half-heard a respondent say, mockingly, about some ad copy, “Oh, your product saves the day!” But, because he was only half paying attention, all he caught were the words themselves. He left thinking it was a positive reaction and returned to management saying, “Consumers think our product saves the day!” It would be easy to laugh at him, were the mistake not so common.

Attendance also means being there for all of the research. “In the group I saw ...” is something you never want to hear. After all, if seeing one qualitative session were as good as seeing all of the project’s sessions, then you would only need one session. In qualitative, part of the challenge is interpreting what everyone has seen and heard within the context of the entire project.

We all have a tendency to generalize, and the fewer respondents/subjects one sees, the more likely those generalizations are to be off-base.

It’s important to remember that we’re only talking about attendance for your core team - those who are deeply invested in how the research will be interpreted (and who will be part of that interpretation). Attendance by non-core members should be avoided as much as possible; their peripheral involvement can translate into lazy viewing and misinterpretation. Besides, the larger the group, the more unwieldy and less productive your final debrief will be.

Pitfall #5: Just watching and listening

Some marketers and research managers think that once the guide is approved, it’s the supplier’s gig. The client watches the proceedings, maybe sending in a question or two along the way. But doing this is, again, treating qualitative like quantitative: a static test where the answers will emerge.

This is dangerous not only because the answers in qualitative are rarely obvious at first blush, but also because qualitative is meant to be a dynamic tool, one that is adapted as you learn and progress. With each insight you gain, another road of investigation opens up. It’s when you don’t take those roads that you find the final day of the project boring, because you’ve “learned all there is to learn.”

Instead: Treat your research as a living organism

In fact, one of the beauties of qualitative is that it is flexible. This is very different from quantitative, where you need carefully standardized stimuli and questionnaire structure. If a qualitative stimulus can be improved during or between sessions, make that change to improve your research in turn. If you observe certain behavior over and over again, look deeper to understand its intricacies, how it changes, how it affects other behaviors and how other behaviors affect it.

When well-constructed, almost all qualitative can be iterative in some way or other, meaning that you use the results of one round to create new stimuli or direction for the next. If you haven’t changed at least one element (e.g., one stimulus, one area of inquiry, one focus of observation) from the first session to the last, then you haven’t optimized the process and haven’t learned all you can.

Pitfall #6: Downplaying the debrief

You’re tired. It’s been a long day, and everybody wants to get back to their hotel rooms. Can’t the debrief wait until tomorrow? Or next week, when you’re back in the office?

The short answer is no. Good interpretation is essential to good qualitative, and your debrief, with everybody processing - as a group - what they’ve just experienced, is a crucial element. It’s crucial after each session (or day) because that’s when you decide how to adapt the research to maximize continued learning. And it’s crucial at the end of the project to gather everyone’s individual perspectives on the research and mold the information into a unified set of insights.

The timing of the debrief is crucial, as well. To be effective, it should be held immediately after the respondent discussions or observations end. Next week, or even tomorrow, things will be forgotten and details confused. One team member will remember one thing, while another remembers just the opposite. Add to that the 15 other projects each team member is juggling and you have a recipe for disaster.

Instead: Include a formal debrief as part of the research schedule and make it compulsory

It should be structured, to-the-point and attended by the entire core team. Have a debrief outline or guide prepared in advance so that you’re efficient and focused on key objectives and insights. Have your supplier lead the debrief, much as they would a focus group. Formally record the learnings you agree upon as well as those that you don’t, along with implications and next steps. This will ensure that you have the critical knowledge needed to make decisions and move forward.

Not easy

Despite what many people believe, qualitative is not easy, but doing it poorly is. Think of all that goes into designing and executing a good qualitative study: choosing the right methodology; screening for the right people; crafting your guide; adroitly managing the interaction with your subjects; and culling relevant insights and then consolidating, filtering and applying them to your specific business-decision needs.

Work with your qualitative supplier to incorporate these approaches into each phase of your project planning. You’ll find that your project will run more smoothly and productively and, in turn, will generate more insightful and rewarding results for you and your team.