Motherhood issues

If a quality or feature is rated the same for all brands, its importance in brand choice is nil, no matter how important people say it is. Features or qualities everyone deems "very important" are sometimes called "motherhood issues." They can influence the choice between classes of solutions, like oil heat versus gas heat, adding staff versus new equipment, fish versus meat for dinner. This might be called the generic decision stage, which is often subliminal. Once that stage is complete, motherhood issues play no role in the brand decision, since non-qualifiers have already been eliminated. Many surveys waste a good deal of time and effort reconfirming that motherhood issues are indeed motherhood issues.
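For readers who like to see the arithmetic, a minimal sketch with made-up ratings shows why: an attribute scored identically for every brand has zero variance, and a zero-variance attribute cannot correlate with brand preference, however "important" respondents call it. Any derived-importance analysis reaches the same verdict.

```python
# A hypothetical illustration (not from the article): in a
# derived-importance analysis, an attribute rated the same for
# every brand has zero variance, so it cannot discriminate
# between brands no matter how "important" respondents say it is.
import statistics

safety = [5, 5, 5, 5]        # "very important" -- but rated alike for all brands
taste = [2, 4, 3, 5]         # a differentiating attribute
preference = [1, 4, 2, 5]    # hypothetical brand-preference scores

print(statistics.variance(safety))                 # 0 -> no leverage on brand choice
print(statistics.correlation(taste, preference))   # ~0.99 -> drives brand choice
# statistics.correlation(safety, preference) would raise StatisticsError:
# a constant input has no defined correlation (Python 3.10+).
```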

Threats

Research will usually prove that somebody was wrong. Therefore it's a threat to someone - possibly you.

Objectivity

Researchers pride themselves on their objectivity; an impartial observer might wonder. Consider what happens when a survey result turns out to be so different from expectations that we're sure the client would question it. We may go to great lengths to recheck, even re-interview, and review procedures and codes. Do we expend the same efforts on results that come in as expected? Couldn't they be just as wrong?

To reduce that threat, insist on using the newest, most advanced methods in the next survey. It makes you look progressive and up to date; it also prevents embarrassing historical comparisons, since the new survey's data will not be comparable with the old.

Arrested development

New technologies tend to ape traditional formats until they become well enough established to create their own; early automobiles looked like buggies for years. Telephone interviewing is well established, but questionnaires still ape the format of personal and mail questionnaires, often quite inappropriately. It is high time for telephone researchers to develop questioning formats suited to the strengths and weaknesses of a purely aural, non-visual medium. There have been technological innovations, but the basic questioning approach remains mired in routine, ignoring limits on respondent attention and memory that are far more serious over the phone.

Research and measurement

Much of what we call marketing research is not research at all, but measurement. Our cultural bias favors measurement because it provides numbers, symbols we associate with scientific, rational, orderly processes; the numbers tell us how much, how often, how many. Research, on the other hand, tells us how and why, soft information that can't be used in equations and is despised by bean counters.

But our measurements often measure things that are only crude approximations of what we really want to measure, things that sound reasonable and that we actually can measure easily. Audience ratings are an example. A perverse effect of commercially successful measurement is a gradual shift in the target of the optimization effort, from the actual desired effect to the measure that supposedly reflects it. At a large ad agency where I once worked, print ads were carefully crafted to achieve high Starch ratings, rather than to help sell the product.

We can improve the utility of measurement by improving the validity of what we actually measure. That's what research is for, and we don't do enough of it.

How to buy research

Competitive bidding on standardized specifications may be all right for some routine measurement jobs, but not for research. Before you award a research contract to a low bidder, you must determine what you are not going to get - and be sure you don't need it.

That may not be as easy as it sounds. The best way to do it is to define what you really want to know, regardless of feasibility. From that point, you can define what you expect to learn from the best possible survey, and what is practical in terms of time, money and inherent feasibility. Establishing these benchmarks will almost certainly entail a re-examination of the presumed information need; you might realize that even the best possible survey cannot provide sufficiently reliable information, because the survey approach has limitations that are not always recognized.

Most likely, though, clarifying these issues will show the importance of getting the researcher involved from the beginning of the planning stage, before any budget is set or specs are written. Management's failure to do so lies at the root of much of our wasteful survey work.

Job insurance

Most middle managers are well aware of a special research benefit: A bad decision supported by a study is far less threatening to job security than the same decision without a research backing - and the employer pays for the insurance.

Plus ça change

"Survey research often falls short of the careful design and methodical execution implied by the word 'research'; too many surveys are merely crude measurements of variables believed but not proved to be relevant to a given problem. The status of survey research is reflected in management's reluctance to spend on it sums anywhere near those spent on product development."

That quote is from an article in Media/scope, April 1965. I wouldn't change a word of it today.

Significance

Researchers love to quote statistical significance and confidence limits. I am 90 percent confident that these are misunderstood or misinterpreted 90 percent of the time. If these are shown anywhere other than in a footnote or an appendix, I suspect a snow job unless the limitations of the survey results are prominently spelled out on the same page.

In most cases, biases caused by question wording, memory lapses and nonresponse are far more serious sources of error than random sampling variability; but we have no measure for them, which leaves the random-sampling error statement as the only numerical indicator. If we cannot assign a numerical value to something, we tend to assume it is zero. This may be stupid, but we can't help it - we've been trained to react to numerical statements.
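A minimal sketch, with hypothetical figures, makes the asymmetry concrete: the sampling component has a tidy formula, while the wording, memory and nonresponse biases appear nowhere in it.

```python
# A minimal sketch, with hypothetical figures: the 95 percent margin
# of error quantifies random sampling variability only. Question
# wording, memory lapses and nonresponse appear nowhere in the formula.
from math import sqrt

def sampling_moe(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random
    sample of size n -- random sampling error and nothing else."""
    return z * sqrt(p * (1.0 - p) / n)

p_hat, n = 0.145, 400    # hypothetical survey result
print(f"{p_hat:.1%} +/- {sampling_moe(p_hat, n):.1%} (sampling error only)")
```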

A subtler problem with these statistical statements is that they invariably assume a test against a null hypothesis, even if the hypothesis makes no sense whatever from a business perspective. "Top-of-mind awareness of our brand name has risen from 11 percent before the start of the campaign to 14.5 percent after the second month, an increase significant at the 95 percent confidence level." Bully! But what was the target? Of all the before-and-after studies I have seen in the past couple of decades, only a handful used samples large enough to provide the statistical power needed to assess whether the management target had been met. To understand statistical power, you may have to consult a textbook. But it's real and may be more important than the routine confidence limits.
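Here is a rough sketch of the power arithmetic, using the 11 percent baseline from the example above plus an assumed five-point management target; the exact figures are hypothetical, and the pattern matters more than the numbers.

```python
# A rough power calculation, not from the article: normal approximation
# for two independent waves of simple random samples. The 11 percent
# baseline comes from the example above; the 16 percent management
# target and the sample sizes are hypothetical.
from math import sqrt, erf

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def power_two_proportions(p1, p2, n, z_alpha=1.96):
    """Approximate power of a two-sided test that proportions from two
    independent samples of size n each differ (normal approximation)."""
    p_bar = (p1 + p2) / 2.0
    se_null = sqrt(2.0 * p_bar * (1.0 - p_bar) / n)     # SE if nothing changed
    se_alt = sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n)  # SE if target was met
    z = (abs(p2 - p1) - z_alpha * se_null) / se_alt
    return norm_cdf(z)

p_before, p_target = 0.11, 0.16   # five-point management target (hypothetical)
for n in (300, 500, 750, 1000):
    print(f"n = {n:4d} per wave: power = {power_two_proportions(p_before, p_target, n):.2f}")
```

With 300 interviews per wave, the test would detect a met target less than half the time; it takes roughly 750 interviews per wave to reach the conventional 80 percent power.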

Choose your proverb

Information users - whether the information is based on research surveys or another source - should have a two-sided sign on their desks. One side reads: "A little learning is a dangerous thing"; the other says: "Half a loaf is better than none." I tend to favor the latter; the danger in the former can be minimized with a bit of informed skepticism, a healthy attitude for researchers and research users.

Inertia

A trap for the unwary brand manager is to mistake inertia for brand loyalty. Inertia among your brand users is your area of potential vulnerability, the sleeping dog your competitors will not let lie. Customer satisfaction surveys must distinguish active, explicit brand loyalty from passive, accepting inertia. It's not easy.