Quality in, quality out

Editor’s note: Ashley Grace is group president at ARSgroup, an Evansville, Ind., research firm. Ron Conlin is a professor of business at Pepperdine University, Malibu, Calif.

A heated debate among marketers is whether marketing and advertising spending should be decreased during an economic downturn. Although the size of a marketing budget and the difficulty of accurately accounting for its effect on the business make it a tempting area to cut, research from past recessions has consistently shown this to be counterproductive.

A better perspective is that the recession itself offers marketing opportunities: advertisers can often negotiate lower ad rates, and with fewer competitors advertising, media clutter falls and share-of-voice rises - enabling hard-earned brand equity positions to be protected and market share to be won. However, this sort of success doesn’t happen by chance - it is most often the result of a committed approach to ensuring marketing decisions are founded on quality consumer research.

The concept of quality has been a keyword in business for the last 25 years. The battle for customers during the ’80s and ’90s was often fought around reliability and durability - with concepts like those promoted by Juran and Deming and movements such as Six Sigma, lean and total quality management. Industries such as automotive have focused successfully on quality, and the results have been dramatic. Given the huge new-product failure rate (estimated at 85 percent) and the significant waste in advertising spending, it is time for the research community to drive new-product research quality, especially in these tough economic times.

The job won’t be easy: critical marketing issues remain even as research budgets are being slashed. So how are researchers dealing with this? Research suppliers are citing a shift away from descriptive and predictive quantitative research to less-expensive exploratory qualitative research, often with the same research objectives in mind. Given the lack of projectability associated with qualitative research like focus groups, this is an alarming trend.

In addition, there are growing concerns about sampling methods. Probability sampling techniques were the norm with telephone random-digit dialing before do-not-call lists, caller ID and the widespread use of answering machines. Now, with the dramatic emergence of Internet data collection and the huge cost reductions it brings, many have thrown probability sampling out the window. Today’s shrinking budgets push researchers toward the cheapest sources of online sample, often with little attention to the sample source’s projectability.
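
To illustrate the projectability concern, here is a minimal sketch of post-stratification weighting, one common way to re-align a non-probability online sample with a known population profile. The age groups and all of the percentages below are hypothetical, purely for illustration.

```python
# Post-stratification weighting sketch: re-weight an online sample so its
# age profile matches the target population. All figures are hypothetical.

# Share of each age group in the target population (assumed values).
population_shares = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

# Share of each age group actually obtained in the online sample.
sample_shares = {"18-34": 0.55, "35-54": 0.30, "55+": 0.15}

# Weight = population share / sample share, so the weighted sample
# mirrors the population on this characteristic.
weights = {group: population_shares[group] / sample_shares[group]
           for group in population_shares}

for group, weight in weights.items():
    print(f"{group}: weight = {weight:.2f}")
# Over-represented young respondents are down-weighted (about 0.55);
# under-represented older respondents are up-weighted (2.00).
```

Note that weighting can only correct for characteristics the researcher knows and measures; it cannot repair self-selection bias on unobserved traits, which is why the cheapest sample source is not automatically a projectable one.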

Given the pressures associated with reduced budgets, it appears that the market research industry’s efforts to drive quality, validity and reliability are headed in the wrong direction - and that is bad for business.

Maximize the return

It has never been more critical to maximize the return on every advertising dollar, yet relatively little has been published on how marketers can maximize marketing impact during a financial decline. Instead of accepting the cliché that half of their ad budget is wasted, many top marketers are ensuring that all media spending has a positive return by using a quality consumer research program. These positive returns come in the form of increased equity and sales volume and the capturing of share-of-market from competitors.

If one can truly remove the uncertainty of the notoriously un-measurable advertising expenditure, why hasn’t marketing research been called upon more broadly to deliver this sort of quality decision-making guidance? Somewhere along the way, marketing research lost respect as a function and with it, a seat at the marketing decision-making table in many firms. In a push to reduce research budgets and to provide faster turnaround, clients forced research agencies to compromise quality, which, in essence, led to a false commoditization. In a quest to save money and time, marketers began to assume that research measures were comparable across agencies: as long as the data collection technique appeared on the surface to be the same, many believed that persuasion was persuasion, recall was recall, liking was liking, etc., regardless of the underlying processes employed.

Even some of the world’s most respected marketers aren’t immune to the problem. Consider the following from Kim Dedeker, global consumer and market knowledge director at Procter & Gamble:

“There are many examples I could share of what can happen when research quality is compromised. Instead, I’d like to tell a story about the real pain for P&G. It’s something that we’ve seen time and time again across businesses and across geographies. It’s when we field a concept test that identifies a strong concept. Then our [consumer and market knowledge] manager recommends that the brand put resources behind it. The marketing, R&D and research teams all invest countless hours and make a huge investment in further developing the product and the copy. Then later, closer to launch, we field concept and use tests and get disappointing results. And rather than finding an issue with the product, we find that the concept was no good. We realize that the data we’d been basing our decisions on was flawed from the start. This is the part that is so hard for our brands and costly for our businesses. We have to find the data/insights that convey the true voice of our consumer to provide sound consulting to our businesses.”

Source: Research Business Report (October 2006)

P&G is not alone. Many top marketers are now recognizing the sometimes painful adage that all that glitters isn’t gold and are realizing that the business costs in dollars, time and lost opportunity far outweigh the investment in assaying research quality from the start. As Philip B. Crosby argues in his book Quality Is Free: The Art of Making Quality Certain, managing quality as a key driver of business success generally yields savings from eliminated rework - easily paying for the cost of improved quality - and improves performance going forward by reinforcing trust in existing systems and processes.

Industry focus

How can quality decision-making be assured while staying within budget? There is a lot of industry focus now on establishing “quality online research” standards - with firms like Capital One, Coca-Cola, Unilever, General Motors, Kraft, Bayer and P&G leading the way - and this is an important effort. However, before we talk about the quality of a particular fielding technique, it is paramount to establish the fundamentals of a quality research program.

As a guide to thinking about quality and as a reference for marketers, ARSgroup created an eight-part research quality checklist to give marketers an advantage as they navigate difficult economic terrain in their advertising decision-making. While the application may vary, this checklist can be used to ensure a foundation is in place to deliver accurate marketing decision support, regardless of whether the data is collected online, in a central location or via phone.

1. Objective: The business direction is not subject to personal opinion. Pre-testing historian Darrell Lucas has postulated: “Testing, in itself, is a reflection on the judgments of creative people. However, they are likely to be the first to endorse a test which confirms their own judgment.” Marketers need decision criteria that clearly articulate the voice of the consumer and eliminate the influence of personal opinion. To make this approach successful, the measurements, and the corresponding decisions, must focus on the consumer. As stated by John Philip Jones in Getting It Right the First Time, “The effectiveness of advertising suffers when decisions regarding copy strategy and execution are driven by advertiser/agency committees, politics and ‘liking.’”

2. Relevant: Results address specific, pending actions. The metrics used in testing must be relevant to the objectives of the specific ad being tested. Some ads are meant to inform, others to remind and others to persuade. At times the advertiser is trying to increase consumption by current users. If the advertising measurement does not align with these specific business objectives, it will probably be of little use in the marketer’s pending business actions. At the same time, it is important to recognize that the ultimate objective of advertising is a contribution to financial performance. Whatever the immediate objective, there is a need to link actions to financial performance.

3. Timely: Results are available before decisions are made. In this fast-paced consumer world, it is vital that advertising decision-making tools are available to marketers when they need them. Pre-testing implies that measurements are taken before decisions need to be made. The data required must be available before the campaign is launched, not after. While “post” data may help marketers discern why a campaign or an ad failed or succeeded, it is much more cost-effective to spend the extra $20,000 before making a $100-million mistake.

4. Simple: Results are easy to adopt and act upon. Advertising research often brings with it a degree of complexity that makes the results difficult to understand and even harder to use. However, the best metrics have a clear interpretation related to business results. It should not require advanced statistical knowledge or a think-tank committee to make an advertising decision; marketers need simple decision-making tools that tell them clearly how to act in a given business situation. Simplicity is best achieved when key performance indicators can be directly tied to actual business performance - and when diagnostic results are empirically shown to improve the end outcome.

5. Reliable: Measurement results can be replicated. In their 1982 “Consensus Credo Representing the Views of Leading American Advertising Agencies,” the PACT (positioning, advertising, copy testing) agencies asserted: “A copy testing system ... should yield the same results each time that the advertising is tested. ... Tests in which external variables are not held constant will probably yield unreliable results.” In the 27 years since this statement was published, the rules of statistical measurement have not changed: the reliability of any measurement system should not be assumed but rather assessed and managed on an ongoing basis. Lower reliability reduces confidence in a measure because, by definition, it means greater error and lower sensitivity. While sampling variability imposes known limits on the reliability of all sampling-based measures, any “other” error variance reduces reliability further. To verify that this “other” error is minimal, reliability is assessed by comparing test results with later retests of the identical advertisement (a minimal test-retest sketch follows the checklist). Your testing provider should maintain a diligent, ongoing test-retest program to ensure that results are replicable over time and are as reliable as the laws of random sampling allow - and it should openly publish those findings.

6. Sensitive: Representative consumers in appropriate sample sizes. The job of marketing research is to objectively translate the voice of the consumer into the language of business. To do this, the target consumer must be accessible via the collection technique utilized (phone, central location, online). For example, it is important to recognize that some demographic groups are more heavily represented online and more likely to respond to requests to participate in online research. This makes it critical that there be a well-designed screening mechanism to assure a representative sample. Additionally, professional respondents must be eliminated from the sample to ensure an accurate representation of consumer behavior, and respondent representativeness must be balanced to account for known characteristics of target consumers.

Regarding sample size, a measurement system must be able to accurately detect meaningful business differences and must reflect the risk of the business decision - the greater the risk, the larger the sample (a sample-size sketch follows the checklist). As stated above, beyond pure size, the samples should use consumer respondents who have been recruited and qualified for research participation. A sensitive advertising measurement is one that is able to detect meaningful differences among alternative ads, allows for accurate projections of in-market results and ensures precise planning of media expenditures. Results obtained from small sample sizes should be interpreted with caution and used only for diagnosis.

7. Validated and calibrated: Proven to accurately predict business outcomes. Advertising research efforts should be targeted toward identifying valid measurements that predict advertising effects: awareness, share-of-market and consumption, among others. But it is not enough that a measurement be valid; it must be validated (i.e., proven through an ongoing validation program to measure what it purports to measure). There are many measures of intermediate marketing outcomes that need to be validated, but in the long term all of these outcomes need to be linked to and validated against financial performance. Like reliability and sensitivity, individual measures can be higher or lower in validity. Higher validity makes for better decisions. On the other hand, a combination of moderate validity, moderate reliability and a small sample size can make a measure so insensitive as to be useless.

ARSgroup has used current post-market measurement technology to explore the relationship between advertising pre-market measurements and post-market sales results. The evidence from these tests, which has been audited by independent parties, suggests that quality measurements are capable of predicting sales effects with an accuracy rate of up to 90 percent (a simple calibration sketch follows the checklist). As brands and their corporations become increasingly global, measurements must account for differences across brands, conditions, cultures and regions. Yet, while methods may need to vary, the advertiser should be able to interpret the research results in such a way that their relationship to in-market results is universal. Global research standards ensure that a company’s global marketing teams are all speaking the same measurement language.

8. Transparent: The system holds up to independent audits. Due to issues of client confidentiality and security, not all data collected by research agencies can be open to public scrutiny. However, clients should be able to get “inside the black box” to examine all raw and aggregated data collected for their brands as well as explore published blinded, cross-customer meta-analyses. Most importantly, the data should hold up to independent and unbiased third-party scrutiny and audits. The bottom line: Marketers should hold their research agencies accountable and demand to see the proof.
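
On reliability (item 5), here is a minimal sketch of the test-retest logic, assuming hypothetical persuasion scores for the same ten ads measured on two occasions:

```python
import math

# Test-retest reliability sketch: correlate original scores with retest
# scores for the same ads. All scores below are hypothetical.
test   = [7.1, 5.4, 6.8, 4.9, 8.2, 5.5, 6.1, 7.7, 5.0, 6.4]
retest = [6.9, 5.6, 6.5, 5.2, 8.0, 5.3, 6.4, 7.5, 4.8, 6.6]

def pearson(x, y):
    """Pearson correlation: covariance scaled by both standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"test-retest reliability: r = {pearson(test, retest):.2f}")
# Values near 1.0 mean the system reproduces its own results; values well
# below that signal "other" error variance beyond ordinary sampling noise.
```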
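
On sensitivity (item 6), a sketch of the arithmetic behind “the greater the risk, the larger the sample,” using a standard two-proportion sample-size formula. The 20 percent baseline and the five-point difference worth detecting are assumptions for illustration:

```python
import math

# Two-proportion sample-size sketch: respondents needed per cell to detect
# a given difference. Baseline and effect size are hypothetical.
p1, p2 = 0.20, 0.25    # e.g., purchase-intent top-box for ad A vs. ad B
z_alpha = 1.96         # two-sided 95% confidence
z_beta = 0.84          # 80% power

p_bar = (p1 + p2) / 2
n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
      + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
     / (p1 - p2) ** 2)
print(f"respondents needed per cell: {math.ceil(n)}")  # about 1,093 here
# Halving the detectable difference roughly quadruples the requirement,
# which is why small-sample results are suited to diagnosis, not decisions.
```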
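
And on validation and calibration (item 7), a sketch of the core idea: fit pre-market scores to in-market outcomes from past campaigns, then use the fitted line to predict a new ad’s impact. The scores and share gains below are invented for illustration, not ARSgroup data:

```python
# Calibration sketch: least-squares line relating pre-market persuasion
# scores to in-market share-point gains. All data are hypothetical.
scores = [2.0, 3.5, 5.0, 6.5, 8.0, 4.0, 7.0, 5.5]   # pre-market scores
gains  = [0.1, 0.4, 0.7, 1.0, 1.4, 0.5, 1.1, 0.8]   # share points gained

n = len(scores)
mx, my = sum(scores) / n, sum(gains) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(scores, gains))
         / sum((x - mx) ** 2 for x in scores))
intercept = my - slope * mx

new_score = 6.0
print(f"predicted share gain for a score of {new_score}: "
      f"{intercept + slope * new_score:.2f} points")  # about 0.92 here
# An ongoing validation program repeats this comparison as new in-market
# results arrive, so the calibration is continually proven, not assumed.
```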

Not sexy

To most executives, the subject of quality in research is not sexy or strategically interesting, but it is critical during this tough economic era and cannot be overlooked. Shareholders want smart, efficient expenditures from the companies in which they invest. Marketers want higher-order direction from their research agencies so they can do more with less. Research agencies want a strategic seat at the marketing decision table to solidify their client relationships.

The reality that quality research is the key to achieving all of these objectives is illustrated by Michael Harvey, global consumer planning and research director for Diageo: “So my message is clear to our [research] agencies ... until you can get the basics of conducting and analyzing a market research survey right, please don’t ask us to trust your judgment on how we might resolve our business issues.”

Advertising is always an important component of a marketing program. According to “Making a Recession Work for You,” an article featured in American Business Media, “When times are good, you should advertise; when times are bad, you must advertise.” But do it smartly! It’s 2009 and there are quality tools, technologies and systems that can dramatically increase your overall marketing ROI and finally bury that crazy “half the money I spend on advertising is wasted” proverb.