Where do we go from here?

Editor’s note: Bob Yazbeck is vice president, community methodologies, at Gongos Research, Auburn Hills, Mich.

Marketing research online communities (or MROCs as they’ve come to be called) have reached the point where they should be viewed as a permanent option in the marketing research playbook. No longer a “disruptive innovation,” the basic methodology has proven sound for both marketers and researchers across multiple consumer-facing industries. As with any widely adopted approach, MROCs continue to evolve. It seems that every time a question is answered, a new one appears. And rightly so. The expectations are higher than ever for what MROCs can, and should, be able to achieve.

In this article, I will recommend approaches for dealing with four emerging issues that are worth addressing from a methodological perspective. While many questions surround the topic of MROCs, I will focus on the critical ones that we, as researchers, have a responsibility to delve into.

Does size impact engagement?

First, a little MROC history lesson. When communities began, they were exclusively qualitative in nature. This was due to the assumption that membership needed to be limited to a few hundred in order to maintain engagement. And, due to technological limitations inherent in early community platforms, activities tended to rely on open-ended questions. Even with this initial approach, it was clear that communities could provide insights beyond traditional qualitative methods.

Quickly, expectations grew for MROCs to provide an even more holistic view of the consumer. The methodology and its platforms proved flexible enough to deliver the best of both worlds - words and numbers. While retaining the richness of information, communities evolved to produce a layer of statistical rigor around results.

But going from a few hundred to a few thousand members requires extra effort to maintain the richness of member engagement - the lifeblood of any community. Dynamic approaches to sustain member engagement include the following:

Assign two site moderators. Use one to focus on “the research” and the other to focus on member engagement. It’s just as important to personally encourage members and empower “host buddies” as it is to deal quickly with unruly members.

Break members into small teams. This can be done on a temporary or permanent basis, to promote teamwork when it comes to co-creating concepts.

Seed “common” areas of the site. Planting conversation starters allows members to congregate around topics of interest and will serve as a catalyst for member-generated discussions.

Create subcommunities. Leveraging economies of scale allows moderator(s) to have unique conversations with targeted members within the community.

All in all, the main advantage of a large-scale community is flexibility. In addition to providing a large quantitative sample, a large community keeps niche samples ready to respond to targeted issues. This means one community can address the research needs of several functional areas within an organization. Marketing, product development, consumer and/or shopper insights can all have their slice of the community pie.

To brand or not to brand?

One of the first decisions when developing a community is whether to incorporate the client’s brand. It’s tempting to brand a community right at the outset, as MROCs can be powerful research and brand-building tools. However, introducing the brand immediately creates bias, which may limit the type or variety of research conducted in the community. Therefore, careful consideration must be given to the decision of whether to brand.

While logic seems to point to an either-or approach, there is also a hybrid option. Below are ideal scenarios for each:

An unbranded or blinded community is best for conducting exploratory research, brand and product comparisons, understanding consumer wants and needs, and upstream concept development.

A branded community is needed for understanding brand perceptions, and testing packaging, positioning, point-of-sale, advertisements and other marketing materials. It is also necessary for product placements. Additionally, the co-creation process means members are being asked to think like “outside insiders,” so internalizing the brand is necessary.

Starting unbranded, then revealing the brand allows us to assess consumer wants and needs with no risk of bias, before moving into brand-specific research. It can also provide pre-post measures of the impact of the brand on consumers at different points in time. For this approach to work, a comprehensive research plan that covers the life of the community is essential.

There are two very important items to note when managing a branded community. If members have an established relationship with the brand, such as participation in the brand’s loyalty program, the site moderator becomes an extension of the brand and must act accordingly. Otherwise, there is a risk of alienating customers.

The other concern is that of intellectual property rights. Knowing that their brand is exposed, client partners must be protected from any claims on creative rights. This is easily controlled by requiring members to sign an agreement waiving these rights before they can begin participating in the community.

Will conditioning occur with overexposure?

Researchers are rightfully concerned that community members may become conditioned through overexposure. This is especially an issue in communities with repetitive activities or a narrow research focus.

Let’s look at an example where members are frequently asked to assess and narrow down large numbers of concepts. The natural assumption is that members become less critical in their feedback over time. We have actually found the opposite: members become more critical through greater exposure to research in the community environment. When evaluating concepts in a community, we typically include a “control” concept to measure the effect of exposure. In doing so, we have found that scores for the control concept continue to decline over time. Thankfully, we also found that the directional results don’t change - the “winners” remain consistent.

That being said, as researchers we must be able to identify the tipping point at which members can no longer be considered objective. Using the measures below, and returning to our concept-evaluation example, we can diagnose whether overexposure is significantly affecting the research:

Volume and mix of activities. If most or all activities involve concept evaluations or other repetitive activities, there is a high probability that members will become overexposed in as little as three months.

Variety of concepts evaluated. Members who evaluate a greater variety of concepts, or more complex concepts, become overexposed less quickly.

Control concept scores. If scores for control concepts are starting to show a significant decline, this indicates that members have become too critical.
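
For the control concept measure in particular, a simple trend test can flag when the decline stops looking like noise. The following Python sketch is purely illustrative - the wave-level mean scores, the linear-trend test and the 0.05 threshold are assumptions for the example, not a prescribed diagnostic.

```python
# Illustrative check: does the control concept show a significant downward trend?
# Assumes one mean control score per evaluation wave, in chronological order.
from scipy.stats import linregress

def control_decline_detected(wave_means, alpha=0.05):
    """Return True if control scores trend downward with p-value below alpha."""
    waves = list(range(len(wave_means)))
    trend = linregress(waves, wave_means)
    return trend.slope < 0 and trend.pvalue < alpha

# Hypothetical mean control scores (5-point scale) across eight monthly waves
control_scores = [3.9, 3.8, 3.8, 3.7, 3.5, 3.4, 3.3, 3.1]
if control_decline_detected(control_scores):
    print("Control concept scores are declining significantly - members may be overexposed.")
```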

While varying activities helps prevent overexposure, sometimes adding variety is not possible due to community objectives and client demands. In these situations, more intensive steps need to be taken. These include the following:

Replace all community members. If bias cannot be addressed through natural turnover, then consider replacing all community members. This is typically done on an annual or bi-annual basis. For example, in a community where members evaluate concepts weekly, we found it necessary to replace the entire member base at the end of each year. Obviously, this is an expensive and time-consuming course of action, as it requires recruiting an entirely new member base and enduring an additional one- to two-week ramp-up period.

Replace deadbeat members. Not only does periodic replacement of inactive members keep response rates high, but it also mitigates the impact of potentially overexposed members. This compromise approach can prove effective: no time is lost shutting down and clearing out the existing membership, and the member base is actually strengthened.

Implement factoring. This one is a little tricky: a factor derived from the measured change in responses can be applied to results to normalize scores (a simplified sketch follows below). While no additional time or expense is needed to recruit new members, the obvious drawback is that this approach requires very careful implementation.
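
To make the factoring idea concrete, one simple version uses the drift in the control concept as the adjustment: the ratio of the baseline control score to the current control score is applied to the current wave’s concept scores. The sketch below is a hypothetical simplification of that idea, not a definitive method; real adjustments may be considerably more sophisticated.

```python
# Illustrative factoring: rescale current-wave concept scores using the
# drift observed in a control concept (an assumed, simplified adjustment).

def exposure_factor(control_baseline, control_current):
    """Adjustment factor derived from control concept drift."""
    return control_baseline / control_current

def normalize_scores(raw_scores, factor):
    """Apply the factor to raw concept scores from the current wave."""
    return [round(score * factor, 2) for score in raw_scores]

# Example: the control concept averaged 3.9 at baseline but 3.3 in the latest wave
factor = exposure_factor(3.9, 3.3)               # roughly 1.18
adjusted = normalize_scores([3.1, 2.8, 3.5], factor)
print(adjusted)                                   # scores restated in baseline terms
```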

To summarize, there is no standard formula for diagnosing when a community’s member base has become overexposed. But by periodically assessing the situation, you can anticipate when corrective action will become necessary.

Can mobile communities be representative?

Mobile is the logical extension of the online experience. As communities become mobile, sophisticated apps will allow members to participate in activities through smartphones and other devices. This opens up a world of research possibilities, such as in-the-moment responses, as well as multimedia adding depth to those responses. It’s no wonder that there is an incredible desire to move quickly into this space.

Much as we did when the Internet changed the way data was collected, we need to understand how mobile responses differ from non-mobile responses. Thorough research-on-research is necessary to understand the inherent biases among the current base of mobile respondents. In the interim, beta-testing has shown that, compared to online respondents, mobile respondents tend to skew younger and male, with higher levels of income and education.

Thus far, when it comes to the depth and quality of mobile responses, we’ve been pleasantly surprised to find that respondents:

  • are providing reasonably thoughtful qualitative responses, although extra coaching is needed;
  • are not using shortened or “texting” language;
  • are more willing to share video and images to support their quantitative or qualitative responses; and
  • enjoy the experience overall.

Despite these initial positive findings, below are accommodations and compromises for conducting research with members who respond via their mobile devices:

  • In general, activities should be more concise because members tend to respond in a more spontaneous, on-the-go manner.
  • Surveys should be shorter (closer to 10 minutes versus 15-20 minutes for Internet).
  • Scales need to be limited to five points or fewer due to limited screen size.
  • Qualitative questions should be simple and straightforward, without multiple supplemental or clarifying questions.

While representativeness is a hot-button issue right now, with the current rate of smartphone adoption in the U.S., it won’t be for long. In fact, it will be a challenge for community platforms to keep pace with mobile technology. More than ever, researchers need to be where consumers are, or they may find themselves missing out on a highly desirable and growing sample.

On the horizon

The most progressive communities today are dramatically different than communities of the recent past. Advances in the methodology are being driven by a healthy mix of technology improvements, platform enhancements, a handful of ambitious thought leaders and growing client demands.

Presently, the following developments are on the horizon for the community marketplace:

Next-generation mobile. Geolocation, barcode scanning and QR code scanning will allow members to participate in MROCs while they are in the moment.

Communities within communities. The demand for quantitative and multiple-targeted samples means MROCs will need to push the boundaries of community sample size.

MROC as an internal omnibus. More client-side researchers view MROCs as a foundation for multiple research initiatives. Other methodological tools, such as focus groups and surveys, are then utilized on an ad-hoc basis. By taking a “build it and they will come” approach, other functional areas within a client organization will leverage this quick, effective and inexpensive way to field research.

In-sourcing. More than ever, MROC providers need to be flexible enough to shift between in-sourced and full-service offerings - and everywhere in between - as their client partners’ needs change.

Be dynamic

In closing, communities as a research methodology will continue to be dynamic - offering both opportunities and challenges along the way. Keeping pace with change requires researchers to be nimble. As a methodologist dedicated to advancing the health and efficacy of communities, I look forward to tackling new issues and continuing to refresh my research playbook.

The pros and cons of branding a community

Pros:

Brand advocacy. Members can become loyal, passionate and enthusiastic fans of the brand. Members’ brand scores consistently increase after they join a branded community.

Better participation rates. When participants know who is sponsoring the research, they are more willing to join, as they perceive that like-minded individuals will also be participating.

Potential cost efficiency. Due to brand recognition, MROCs can be easier to recruit, saving money on incentives and sampling costs.

Cons:

Irreversible. Once the MROC is branded, you can’t go back unless you start over completely with a fresh set of members.

Brand bias. Results may be skewed due to preconceived notions about the brand.

Groupthink. Since like-minded individuals might be drawn to the brand, members may lack measurable differences of opinion.

Negative opinions. You have to take the good with the bad; you may hear some unfavorable opinions.

Filtered responses. Due to their affinity with the brand, consumers may not want to risk being removed from the community and so they may hold back their honest responses.

Brand management. Since community moderators are the face of the brand, they must be extra vigilant to ensure the brand’s essence is maintained at all times.