Taking the reins

Editor’s note: Don Gloeckler is senior manager, external scaled solutions, consumer and market knowledge at Cincinnati-based Procter & Gamble.

One of the best aspects of working in the consumer and market knowledge (CMK) organization at Procter & Gamble is that the impact of our consumer insights is reflected in just about every major decision that the company makes. This is because we believe that we exist to serve consumers - to touch and improve lives.

The challenge associated with this is that we must be vigilant to ensure that the insights we deliver meet the high standards required to support our far-reaching business objectives. We know that research quality benefits not only our company but also our industry and, most importantly, the consumers we serve worldwide. To meet that quality standard, we need to be confident that accurate data are always at the foundation of our recommendations, whether those data are collected face-to-face, via mail, over the telephone or online.

This has been our goal over P&G’s decade-long history with online research. Having done more online quantitative research than many companies in the world, we have certainly had our share of illuminating experiences, sometimes finding work that was poorly done. The challenges we faced in our years of online research required us to take an active role in pursuing online research data quality. In 2009, we developed online research quality principles and requirements to deliver consistent, reliable online research across all of P&G and with our suppliers. In this article, we want to recount why and how we did it, explain why we believe online research quality guidelines are important to the industry and show how market research buyers can drive that change to benefit everyone - including our suppliers.

Experiences were less than perfect

Let’s start with why P&G’s CMK organization took the quality requirements path. It begins with our experience with online research. Some of our experiences were less than perfect, including test/re-test inconsistencies, illogical research results and even a product test that delayed a major product launch. We discovered firsthand the problems that a lack of guidelines can create for major business decisions.

The Advertising Research Foundation’s Foundations of Quality initiative pointed to issues with the sample that we, and other research buyers, rely on for conducting online market research surveys. The ARF fielded a landmark survey across 17 online panels, a telephone panel and a mail panel. That research showed a 41 percent e-mail address overlap across panels, which means that sample drawn from multiple providers can result in the same panelists being invited to a survey more than once. The study also found that longer surveys make undesirable survey-taking behavior, such as speeding and straightlining, nearly six times more likely.
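
To make the overlap problem concrete, here is a minimal sketch - in Python, with invented names, not any panel’s actual process - of how invitations drawn from multiple sample providers might be deduplicated by comparing normalized, hashed e-mail addresses (hashing keeps raw addresses out of the comparison):

```python
import hashlib

def email_key(email: str) -> str:
    """Normalize and hash an e-mail address so sample sources can be
    compared for overlap without exchanging raw addresses."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def deduplicate(invitations):
    """Keep only the first invitation per hashed address.
    `invitations` is an iterable of (panel_name, email) pairs."""
    seen, unique, duplicates = set(), [], []
    for panel, email in invitations:
        key = email_key(email)
        if key in seen:
            duplicates.append((panel, email))
        else:
            seen.add(key)
            unique.append((panel, email))
    return unique, duplicates

# The same person sourced from two panels is caught once normalized.
sample = [("PanelA", "jane@example.com"), ("PanelB", " JANE@example.com")]
unique, duplicates = deduplicate(sample)
print(len(unique), "unique,", len(duplicates), "duplicate")  # 1 unique, 1 duplicate
```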

There are a number of industry reports that question online data quality, but P&G’s own learning was sufficient to drive us to action. Our experience with online research made it clear that two of the largest contributors to data quality are the sample and the survey instrument. This article largely focuses on our work with sample quality.

Complicated by a few factors

It sounded simple enough; we needed a solution that would consistently deliver high-quality sample and ensure well-designed survey instruments. This need was complicated by a few factors.

First, online sample and survey instruments frequently are not controlled by a single entity; they are a responsibility shared between clients and suppliers (and the suppliers’ technologies for execution).

Second, each supplier has its own approach to data quality - some of which are inefficient (post-survey data cleaning), reactive (data weighting) or not visible to the client (no reliably auditable metrics demonstrating quality). As a whole, these approaches are inconsistent across suppliers.

The bottom line was that P&G lacked a consistent, cross-supplier set of online quality expectations and solutions.

Today, P&G has online data quality specifications and delivery standards built into our research allocation process. The path to get there involved defining online data quality, developing requirements for how suppliers must deliver on that definition and addressing both internal and supplier challenges.

Here are the highlights of that journey.

What online data quality means

Guided by what we had learned about sample and survey instrument quality, we first defined what online data quality means for P&G samples and surveys.

What would constitute “high-quality online sample” and alleviate the issues we confronted? Our experience indicated that online sample must include only respondents who: are real people whose identity and location can be authenticated; are qualified to answer the survey based on screening and behavioral criteria we determine; take each survey only once; and answer questions thoughtfully.
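
To show how such a definition can be made checkable, here is a hypothetical sketch - the fields and thresholds below are our own illustration, not P&G’s actual criteria - expressing the four requirements as an automated screen:

```python
from dataclasses import dataclass

@dataclass
class Respondent:
    verified_identity: bool     # identity and location authenticated
    meets_screening: bool       # passed the screening/behavioral criteria
    prior_completes: int        # times this person already took this survey
    minutes_to_complete: float  # total time spent on the survey
    straightlined_grids: int    # grid questions answered with one repeated value

def passes_sample_quality(r: Respondent,
                          min_minutes: float = 3.0,
                          max_straightlines: int = 1) -> bool:
    """Apply the four criteria in order: real and authenticated,
    qualified, unique, and answering thoughtfully."""
    return (r.verified_identity
            and r.meets_screening
            and r.prior_completes == 0
            and r.minutes_to_complete >= min_minutes
            and r.straightlined_grids <= max_straightlines)

# A duplicate respondent fails even if otherwise well behaved.
print(passes_sample_quality(Respondent(True, True, 1, 12.0, 0)))  # False
```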

With respect to the survey instrument, we found that poorly designed, complex surveys encouraged undesirable survey-taking behavior (straightlining, speeding, etc.). We needed a way to systematically predict, measure and benchmark the performance of our survey instruments for their impact on respondent engagement so that we could improve them and prevent poor surveys from ever being fielded.
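
As a hypothetical illustration of what measuring and benchmarking an instrument might look like - the metrics and the 10 percent thresholds are our assumptions, not P&G’s scoring model - respondent behavior from a soft launch can be rolled up into survey-level statistics that flag a questionnaire for redesign before full fielding:

```python
from statistics import median

def survey_engagement_report(durations, straightline_flags,
                             expected_minutes: float = 15.0):
    """Summarize soft-launch behavior so a poorly designed survey
    can be reworked before full fielding.

    durations          -- completion times in minutes, one per respondent
    straightline_flags -- True where a respondent straightlined grids
    expected_minutes   -- the designer's estimate of survey length
    """
    speeders = sum(1 for d in durations if d < expected_minutes / 3)
    report = {
        "median_minutes": median(durations),
        "speeding_rate": speeders / len(durations),
        "straightline_rate": sum(straightline_flags) / len(straightline_flags),
    }
    # Hypothetical benchmark: flag for redesign if either rate tops 10%.
    report["needs_redesign"] = (report["speeding_rate"] > 0.10
                                or report["straightline_rate"] > 0.10)
    return report

print(survey_engagement_report([14.2, 4.0, 16.5, 3.8, 15.1],
                               [False, True, False, True, False]))
```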

Needed objective measures

Simply asking our suppliers to deliver our definition of online sample and survey quality was insufficient to ensure that our results would be replicable - across time, sample sources, technology platforms or geographies. We needed objective measures to indicate that the requirements were met; we needed a process to ensure all projects were using the same quality requirements; and we needed a mechanism to make certain our suppliers uniformly applied our requirements.

With these challenges in mind, we developed requirements for how we expect our suppliers to consistently and transparently deliver on our definition of online sample and survey instrument quality. Specifically, we established that suppliers’ online data quality solutions must:

  • use objective quality criteria that are predetermined, replicable and standardized;
  • rely on automated processes to meet quality requirements;
  • ensure that potentially fraudulent respondents cannot easily identify or circumvent the quality measures in place;
  • uniformly apply quality requirements to all projects when requested, regardless of sample source, survey technology and geography;
  • deliver reports demonstrating the impact of applying the quality requirements (see the sketch after this list); and
  • protect and secure all personally identifiable and confidential information collected from respondents, suppliers and/or clients.
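
To illustrate the kind of auditable output that reporting requirement calls for - the rejection taxonomy and figures here are hypothetical, not a real supplier’s format - an automated pipeline might emit a per-project summary such as:

```python
from collections import Counter

# Hypothetical rejection reasons; a real supplier's taxonomy may differ.
REASONS = ("failed_authentication", "duplicate", "failed_screening", "disengaged")

def quality_impact_report(project_id: str, rejections) -> dict:
    """Tally automated rejections by reason so the client can audit
    how the quality requirements were applied to a project."""
    counts = Counter(rejections)
    unknown = set(counts) - set(REASONS)
    if unknown:
        raise ValueError(f"unrecognized rejection reasons: {unknown}")
    return {"project": project_id,
            "total_rejected": sum(counts.values()),
            **{reason: counts.get(reason, 0) for reason in REASONS}}

print(quality_impact_report(
    "P-0001",
    ["duplicate", "failed_authentication", "duplicate", "disengaged"]))
```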

These requirements supported replicable results and objective, auditable measures of compliance with our quality criteria that, importantly, allowed us to compare across suppliers. With our requirements in hand, we had to put a process in place so that they were consistently followed throughout our organization. Our goal was to ensure all projects were using the same quality requirements.

We institutionalized our requirements by building them into our research purchasing process. We got internal buy-in that only suppliers capable of delivering our quality standards should be considered for research allocations and that meeting our requirements would be a key criterion on the list of supplier priorities.

We felt strongly that we could work with our core suppliers to deliver our requirements because they were supported by empirical evidence, feasible for the suppliers to implement and verifiable. With ongoing communication, collaboration and executive support, we found our suppliers were willing to adopt third-party data quality technologies to meet our requirements.

Faced challenges

The process of institutionalizing online research quality requirements may sound straightforward in the recounting. However, we faced challenges in developing and implementing our online research quality expectations. Researchers were concerned that results might not be comparable with previous research. Both research managers and panel suppliers worried about sample fulfillment, fearing potentially high rates of respondent failure on the quality criteria. Research managers and suppliers shared an anxiety that our requirements would slow our research. Finally, nearly every stakeholder was concerned about the cost implications.

These challenges to online quality standardization were not insurmountable. We ran pilot studies and, where necessary, conducted parallel tests, examining any differences to evaluate the impact of the requirements on comparability. We rejected the notion that fulfilling a project with questionable respondents should be an option in research execution. In the end, the respondent rejection criteria did not slow us down. Additionally, our suppliers are now in an even better position to tout the quality of their panelists.

The automation the requirements demand and the real-time rejection of fraudulent or duplicate respondents have the potential to reduce the time spent on post-field manual data cleaning, ultimately decreasing the time spent on projects.
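
The design choice is to gate respondents at survey entry rather than clean the data afterward. A minimal sketch - the function and reason codes are invented for illustration:

```python
def survey_entry_gate(respondent_id: str, authenticated: bool,
                      already_completed: set):
    """Decide in real time, at survey entry, whether a respondent may
    proceed. A rejected interview never enters the dataset, so there
    is nothing to clean out of it after fielding."""
    if not authenticated:
        return False, "failed_authentication"
    if respondent_id in already_completed:
        return False, "duplicate"
    already_completed.add(respondent_id)
    return True, "admitted"

completed = set()
print(survey_entry_gate("r-17", True, completed))  # (True, 'admitted')
print(survey_entry_gate("r-17", True, completed))  # (False, 'duplicate')
```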

We were not willing to accept questionable data in exchange for lower costs. We recognized that ensuring better data quality could cost a little more, and we were willing to manage that fact.

Reaping the benefits

Today, our online quality requirements are in effect and we are reaping the benefits of their implementation. We evaluate suppliers and select them for projects based on our quality criteria. We’re confident that a consistent set of quality standards is being deployed across our suppliers. In addition, we have greater confidence that we get reliable and accurate research results. We’ve gained valuable visibility into our suppliers’ sample quality practices through auditing reports, and we are confident that we are reducing the costs associated with re-fielding surveys and over-sampling due to poor-quality data. Most importantly, we are more confident that the decisions our company makes are based upon valid insights that accurately reflect the needs of the consumers we serve.

Our approach to online quality standardization came down to this: We developed our requirements based on our experience and empirical evidence; garnered the requisite internal buy-in; worked with our suppliers, who came to understand the rationale of the standards and delivered on them; built our requirements into our purchasing process; and now we receive reports verifying that our standards are met for each project. As a result, we have come a long way toward bolstering our confidence that we have accurate data and solid recommendations that enable us to better serve our consumers around the world.

Drive change

P&G’s story of online sample quality guidelines demonstrates that it is possible for research buyers to drive change in the quality of the research we buy. Knowing now what is possible, our research team strongly encourages other research buyers to adopt their own guidelines to foster broader quality improvement. As more buyers take the reins of online research quality, greater strides can be made toward rooting out low-quality data, which will benefit the entire industry.

You may be asking yourself, “If P&G has standards in place and feels confident that it is getting reliable, accurate research, why does it care what other companies do?” The answer is that without this kind of strong quality effort, the credibility of our companies - and the credibility of the research industry at large - will suffer. A lack of quality expectations fosters a “race to the bottom” on costs, too. Poor quality is bad for everyone.

We believe that overall online data quality will improve if more buyers insist on their own objective, consistent and verifiable measures of quality. As consumers of online market research, we believe each company needs to establish thresholds for what it considers an acceptable level of quality. Anything below those thresholds should be considered unacceptable - and what buyers refuse to accept, suppliers will not want to sell.

Create a new base

Today, research suppliers each take their own approach to quality and there is no consistent and transparent way for buyers to assess quality across suppliers. Research clients must create a new base for what is considered acceptable - a minimal expectation of what is considered useful research. If research buyers set quality standards, overall industry research quality will increase.

Hopefully, P&G’s experience with online quality requirements has made clear the buyers’ incentives to adopt guidelines. Buyers who adopt quality expectations can readily evaluate suppliers using a common set of criteria; be confident that common guidelines are being used across all their projects; and feel secure that they are receiving reliable research whose quality can be verified through auditable measures.

Research suppliers also benefit from broad adoption of guidelines. David Haynes, CEO of Opinionology, explained the quality premium this way in his August 2010 Quirk’s article on Internet access panels (“Are Internet access panels a lemon market?”): First, if suppliers can demonstrate objective measures of data quality, research buyers may be more inclined to pay a premium for better quality, and suppliers will be able to certify that quality after the sale through auditable metrics. Second, if buyers are using an objective set of measures to validate quality, then suppliers can more efficiently implement a standard process to meet and deliver on those measures. With broad adoption of quality guidelines, each client won’t require a custom approach, and suppliers will be able to address quality concerns more efficiently with templatized RFP responses. Ultimately, the sales and delivery process becomes more efficient and suppliers get a fair market price for quality.

The end result? Research buyers will be able to select suppliers by a measure other than price. Those suppliers who can deliver quality and demonstrate it will be rewarded. Providers of lower-quality data will be identified and will pay the price in lost market share. Online data quality should improve - industry-wide.

Have become active

At P&G, we feel so strongly about online research quality that we have become active in the development of guidelines that could be used industry-wide. We have been participating in industry forums that advocate improvements in online research quality, including the Advertising Research Foundation’s Online Research Quality Council and the TrueSample Quality Council. The latter organization includes market research industry leaders, suppliers and Fortune 500 buyers who share the common goal of markedly improving online sample and research quality. Collectively, 14 companies are involved, including representatives from General Mills, MarketTools, Microsoft, Nestlé, Opinionology, Research Now and Samsung Electronics.

In late 2010, the diverse members of the TrueSample Quality Council worked together to issue a set of online consumer research quality guidelines that built upon requirements that P&G and Microsoft presented at the Forrester Marketing Forum earlier that year. While the guidelines are rooted in P&G’s and Microsoft’s quality requirements, the Council did a great deal of work to modify them for broad adoption. (See http://marketing.markettools.com/rs/markettools/images/MarketToolsOnlineConsumerResearchQualityGuidelines.pdf.) Ultimately, the Council collectively endorsed a set of quality guidelines that: can be readily used to determine whether suppliers meet quality conditions; are easily implemented in the research buying process by inserting into RFPs or statements of work; can be met by a variety of suppliers; and will ensure consistency and reliability in research.

Our goal in participating in the development of the online consumer research quality guidelines was to make it easier for other research buyers to be proactive about online research quality. We want more buyers to be in a position to move the industry toward better-quality data. With these guidelines, buyers will have alternative means of selecting suppliers, beyond price or simple claims of quality. When buyers select suppliers based on the verifiable quality of their product, data quality will improve across the industry.

Take the reins

So what can you do? You can take the reins of research quality today. You can take advantage of the RFP-ready guidelines that are already available. You can start including them in your buying process and awarding projects to suppliers who meet them. You can start requiring that your own quality standards are being met for your research. You can feel confident that you are getting high-quality data. You can support elevating research quality for the entire industry.

Broad adoption of online research quality guidelines can change the client/supplier relationship for mutual benefit. Buyers will be able to evaluate suppliers based on the quality of their product in addition to price and speed. Suppliers can build their business by ensuring the delivery of consistent online research quality. Ultimately, variability in the quality of online data will be diminished. Online research data will be better, for everyone, and all of us will know that because we will have the reports to prove it.