Something ventured, something gained

Editor’s note: Julie Wittes Schlack is senior vice president, innovation and design, at Communispace Corporation, a Watertown, Mass., research firm.

Up until about 10 years ago, the differences between qualitative and quantitative research could be clearly articulated. Qualitative data took the form of words and pictures; quantitative data was expressed in numbers. Qualitative research was conducted with a maximum of 15 people at a time and usually in real time, while quantitative research typically required a sample size at least in the hundreds and could be conducted synchronously or asynchronously. But with advances in text analytics (which enable researchers to quantify what is essentially qualitative data) and the kind of inquiry and feedback enabled by Web 2.0 technologies, the boundaries between the two forms of research are blurring.

Accelerating that fusion is the explosive growth of market research online communities (MROCs). And compounding the confusion is the concept of the “panel community” - panels from which small groups can be sliced off for more intensive qualitative work.

These trends have become flashpoints in a larger debate about sample size, respondent quality and reliability. When does qualitative morph into quantitative research? What’s the difference - if any - between a panel with some community features and a community with some survey capability? Is one environment better able to assure respondent quality than another? And what’s the magic n needed for researchers to feel confident in what they’re hearing?

In this article, we’ll explore the broad question of how research conducted in this kind of social media context is fundamentally reframing traditional choices, creating both new risks and new opportunities. First, though, some definitions and background.

Get to know their customers

Back when Mad Men referred to advertising professionals (and not to the acerbic television show about them), brands tried to get to know their customers through qualitative techniques like focus groups and through quantitative means like telephone and mail surveys. Then along came the Internet, and, just as the advent of television led to a rash of televised radio plays, market researchers used the Web to do what they’d always done - administer surveys - but do so in a more time- and cost-effective manner.

But just as TV viewers started to hunger for more than the chance to watch talking heads, the arrival of Web 2.0 - with its emphasis on collaboration, conversation and consumer-generated content - led consumers to take a more active role in their feedback to companies, demanding to be not just respondents but advisors. Empowered by public online forums and social networks - some brand-sponsored, some wholly independent - consumers now have venues in which to offer both solicited and unsolicited input to major brands. The locus of control has shifted somewhat from the researcher asking survey questions tied to a moment in time to the consumer spontaneously posting reviews and stories, generating a continuous stream of input.

While that white noise of online sentiment can be useful - especially for objectives like brand monitoring and reputation management - it creates as many problems for researchers as it solves. After all, how typical are these blogging, tweeting online activists? And while it’s all well and good to try to mine insight from their conversations, researchers still have specific questions in need of answers.

Hence the allure of panels and online communities. Both enable the capture and tracking of information about who is expressing what views. Both enable researchers to test stimuli and get specific answers to specific questions in a fairly secure manner.

But they also differ along some key dimensions. Surveys capture sentiment at a moment in time and don’t provide the continuous stream of insight that long-term communities do. They can provide more measurable, projectable data but are relatively tone-deaf to the nuance and texture of the more spontaneous, consumer-generated, multi-modal expression (i.e., text, images, video, etc.) that communities support. They are useful for confirmation and validation but can be of limited value for discovery and innovation.

In an attempt to provide researchers with the whys behind the whats, some panel companies offer custom or “communi-panels” in which a small group (200-500) of panelists is invited not only to respond to surveys but also to talk to the client company via a basic online discussion board. However, these groups are typically short-lived and reactive - reflecting the essentially project-driven approach typical of survey research - and tend to generate less engagement and participation than their long-term counterparts do.

In contrast, the hallmark of longitudinal organic or recruited communities (not mere panel spin-offs) tends to be shared passion or purpose, relationships with other community members and reciprocity on the part of the sponsoring brand: it reveals its identity, acknowledges what consumers are saying and, wherever possible, closes the loop by sharing if and how it is acting on that input. But these very strengths of private online communities - long-term relationship, intimacy, high engagement and transparency - raise concerns about data validity, projectability and bias. And the typical size of these communities - 300-500 members - creates confusion as to whether they’re qual or quant.

Risking and gaining

So what’s a market researcher to do? Reframe the choices. We need to understand what we are risking - and gaining - by shifting our focus and methods, and by looking beyond dichotomies like data vs. insight or quant vs. qual. We need to think in terms of informed trade-offs and then ask ourselves: “Given these trade-offs, what’s the best tool for the job?”

Following are some of the trade-offs that are especially germane to today’s social media environment.

Big vs. small

Historically, the benefits of having large numbers of research respondents were obvious. Large samples could be representative of the general population and robust enough to surface statistically significant differences between responses, creating confidence in the projectability of the findings. However:

Not every insight needs to be projectable. When trying to forecast sales, it’s crucial that your survey sample be as typical of your target audience as possible. But it takes only one thoughtful or personal disclosure to shed light on an unmet need or new opportunity. Your most passionate, articulate consumer may not be the most typical one but may well be the most valuable one. And if your target audience definition is specific enough (e.g., platinum-level frequent fliers; men who wax their cars at least twice a month), you may indeed feel comfortable generalizing from a relatively small sample size to that entire target.

Poor respondent quality can amplify mistakes/misdirection. Straightliners, speeders, phantoms ... a whole new lexicon has sprung up to describe the entrepreneurial, if unethical, group of people who create multiple online identities with the goal of taking as many paying surveys as possible in the shortest period of time. Panel and online community companies are continually developing new methods to identify and weed out these people, but their mere existence calls into question the assumed benefit of large sample sizes.
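
As a concrete (if simplified) illustration of the kind of screening these companies perform - the thresholds, field names and data here are hypothetical, not any vendor’s actual method - consider a minimal Python sketch that flags likely speeders and straightliners:

    # Hypothetical quality screen for a batch of survey responses.
    # Each response records a completion time (in seconds) and the
    # answers given on a grid of rating questions.
    RESPONSES = [
        {"id": "r1", "seconds": 412, "ratings": [4, 2, 5, 3, 4, 2]},
        {"id": "r2", "seconds": 95, "ratings": [3, 3, 3, 3, 3, 3]},
        {"id": "r3", "seconds": 388, "ratings": [5, 4, 4, 2, 3, 5]},
    ]

    MIN_SECONDS = 180          # assumed minimum plausible completion time
    MAX_IDENTICAL_RATIO = 0.9  # flag if >90% of grid answers are identical

    def is_speeder(resp):
        """Completed the survey implausibly fast."""
        return resp["seconds"] < MIN_SECONDS

    def is_straightliner(resp):
        """Gave (nearly) the same answer to every grid item."""
        ratings = resp["ratings"]
        most_common = max(ratings.count(v) for v in set(ratings))
        return most_common / len(ratings) > MAX_IDENTICAL_RATIO

    flagged = [r["id"] for r in RESPONSES if is_speeder(r) or is_straightliner(r)]
    print(flagged)  # ['r2'] - a candidate for removal or manual review

Real panel-hygiene systems layer many more signals - digital fingerprinting, trap questions, cross-panel deduplication - on top of simple checks like these.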

Big numbers don’t yield high participation rates. In a typical large public forum or online community, 1-9 percent of visitors post original content and the remaining 90-99 percent “lurk.” Even in large private communities (10,000+ members), active participation averages about 20 percent, meaning that 80 percent of site visitors don’t participate. In contrast, research conducted by Communispace Corporation across 60 of its own small (300-500-person) private online communities and over 25,000 members showed an average lurker rate of only 14 percent. The intimacy of small communities makes members feel heard, and that, in turn, fuels greater engagement.

These points aren’t meant to refute the legitimate statistical arguments for large sample sizes in specific tasks, such as sizing markets and predicting sales. But increasingly, market researchers are relying on smaller, more highly engaged samples to generate and validate key insights, tweak concepts, and test and refine surveys that they then field with a larger sample later in the process. Once a product or campaign is out in the marketplace, they employ Web mining and other passive listening platforms to monitor awareness, buzz and general sentiment.
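
To put the statistical side of that trade-off in concrete terms, here is a minimal sketch (assuming simple random sampling, a 95 percent confidence level and a worst-case 50/50 split on the question being asked) of how the margin of error on a survey percentage shrinks as the sample grows - and how quickly the gains flatten out:

    import math

    Z_95 = 1.96  # z-score for a 95 percent confidence level

    def margin_of_error(n, p=0.5):
        """Worst-case margin of error for a proportion from a simple random sample."""
        return Z_95 * math.sqrt(p * (1 - p) / n)

    for n in (100, 400, 1000, 5000):
        print(f"n={n:>5}: +/- {margin_of_error(n):.1%}")

    # n=  100: +/- 9.8%
    # n=  400: +/- 4.9%
    # n= 1000: +/- 3.1%
    # n= 5000: +/- 1.4%

Quadrupling a sample only halves the margin of error, which is one reason the marginal value of ever-bigger samples diminishes once directional confidence has been reached.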

General vs. specific populations

“Representative” is an increasingly questionable concept. With the majority of the U.S. population online and many American consumers engaged in some form of brand feedback - whether through posting reviews, taking surveys or becoming “fans” of specific brands - it’s difficult to define representative populations in anything but strictly demographic terms. And with traditional market research targets like first-time moms spanning ever-widening age ranges, and broad definitions like Hispanic or African-American ever more open to interpretation, even conventional demographic definitions can be less precise or meaningful than they used to be.

While this trend can be problematic for research in certain product categories, it also creates new opportunities in a market that’s increasingly niche-based and long-tailed. Relevancy in products and messaging is critical to a brand’s success, so researchers often have more to gain by listening to the “right” group of people than they do by trying to generalize findings to a generic population.

There are many ways to gain access to that targeted group - by commissioning custom panels, by mining content from the blogs and public forums they frequent, by pushing surveys to members of Facebook and other social networks who fit a given profile and by recruiting them for private communities. But regardless of method, the more successful you are in engaging and retaining these consumers, the more likely you are to arrive at actionable insights.

Pure vs. passionate

Inherent in the notion of a “pure, untainted” sample is the assumption that the researcher is having a one-time encounter with a neutral group of respondents. To the extent that this replicates real-world conditions (e.g., testing ad recall based on a single exposure), that’s a useful framework.

But answering questions posed by an anonymous brand, with no view into if and how one’s responses are going to influence that brand, is rarely an emotionally engaging experience. Most respondents will try to be polite and answer the researcher’s questions with as much precision as the format allows. But the price paid for that singular, pristine encounter is superficiality. After all, from consumers’ perspective, they are being asked to give advice without knowing who they’re talking to or why the research sponsor is asking. Think about how much more focused your own book or restaurant recommendations or parenting tips are when you know who you’re advising - better still, when you have a reciprocal, long-term relationship with them - and the reasons why more bilateral research methods can yield higher-quality insights become obvious.

While episodic survey research can be relatively inexpensive and provide solid feedback, there are also tremendous benefits to be derived from longitudinal work. Does the sauce taste as good on the fifth use as it did on the first? Do patients take their medications as religiously once the prescription bottle turns cloudy? What triggers attrition from one product, or trial of another? Is today’s message more compelling than the one we came up with last week? These are questions that can only be answered by understanding how consumers’ attitudes and usage evolve over time, and by iterating with the same group as you develop and refine products and messaging.

Of course, a long-term relationship with the same individuals understandably raises concerns about positive bias. But innovation research tells us that it is the lead users - the ones most knowledgeable and passionate about a given brand or category - who often generate the ideas and fuel the organizational will to change and improve products. And research conducted by Communispace Corporation across 15 of its CPG communities and over 2,300 members suggests that:

  • Community participation leads members to feel heard by the sponsoring brand.
  • As a result of feeling heard, they feel a greater affinity with the brand.
  • That greater affinity results in more candor, not less. 

Based on both content-coding (counting instances of negative feedback, positive feedback and the rationale for each) and concept ratings, the research indicated that community members were actually slightly more critical of the individual concepts being tested. As with close friends or family members, their candor increased with their emotional investment in the brand’s success.
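
By way of illustration, the tallying behind this kind of content-coding is simple to mechanize. The sketch below is hypothetical - the group labels, codes and counts are invented for illustration, not drawn from the study described above - but it shows the basic counting involved:

    from collections import Counter

    # Hypothetical content-coded comments: each is tagged positive or
    # negative, plus whether the member supplied a rationale.
    coded_comments = [
        {"group": "tenured", "sentiment": "negative", "rationale": True},
        {"group": "tenured", "sentiment": "positive", "rationale": True},
        {"group": "tenured", "sentiment": "negative", "rationale": True},
        {"group": "new", "sentiment": "positive", "rationale": False},
        {"group": "new", "sentiment": "positive", "rationale": True},
        {"group": "new", "sentiment": "negative", "rationale": False},
    ]

    totals = Counter(c["group"] for c in coded_comments)
    critical = Counter(c["group"] for c in coded_comments
                       if c["sentiment"] == "negative")
    reasoned = Counter(c["group"] for c in coded_comments if c["rationale"])

    for group in totals:
        print(f"{group}: {critical[group] / totals[group]:.0%} critical, "
              f"{reasoned[group] / totals[group]:.0%} gave a rationale")
    # tenured: 67% critical, 100% gave a rationale
    # new: 33% critical, 33% gave a rationale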

By the same token, that ongoing relationship with the sponsoring brand makes private communities a poor choice for testing brand awareness or even elasticity.

Infusion of inspiration

Stan Sthanunathan, vice president - marketing strategy and insights at Coca-Cola and co-chair of the Advertising Research Foundation’s Online Research Quality Council, noted in an interview with Robert Bain in the October 22, 2009 issue of research, “People are too focused on probability and non-probability samples, people are too focused on respondent engagement - this is all about making minor changes to what we are doing right now. Those are all necessary, but they’re not sufficient conditions for the success of the [market research] function ... The clients are saying, ‘Inspire me, help me to take some transformational action’ and we’re busy creating better mousetraps.”

Multiple, and sometimes blended, 21st-century market research techniques can help the industry move beyond validation and provide that infusion of inspiration. Scaling the transparency traditionally associated with qualitative or ethnographic work can yield engaged, motivated participants who generate higher-quality insights. Leveraging online and mobile technologies enables research in more naturalistic settings that feel safe, maximize participant comfort and encourage intimacy. And above all, coming out from behind the glass and building real relationships with “respondents” leads to a deeper knowledge of research participants as real people, enabling researchers to feel greater confidence that they know and can trust them.

That seems worth the trade-offs, doesn’t it?