Editor’s note: Rod Antilla is a founding partner and principal in Action Based Research, Bath Township, Ohio. Brian F. Blake is a senior consultant to Action Based Research and is a member of the Consumer-Industrial Research Program at Cleveland State University.

Dawn doesn’t like what she is hearing. Recently hired as an analyst in the market research department at a large advertising agency, she listens as her new colleagues complain about the agency’s account managers and other executives with client-liaison responsibilities.

One of them recounts a tale of woe in which the executives belittled the researcher’s studies for being “off base.” Research, the execs have said, is too descriptive and not prescriptive enough. That is, the studies describe the market and its buyers but do not tell the company how to take action. Increasingly, Dawn learns, the agency’s execs have come to view the research department as irrelevant. For their part, the researchers now feel that the execs collectively have the IQ of a turnip, for they seem unable to understand what is right in front of their eyes. This is not good!

Different worlds

Dawn is fictional, but the situation she faces is all too real for many market research analysts. This chasm between the two camps exists because execs and researchers often live in different worlds, with different roles and priorities, at least according to their formal job descriptions and the organizational chart on the office wall. To do their jobs effectively, researchers must understand why this chasm exists and take steps to bridge it.

The execs’ role in the agency is to make decisions. Ideally, to oversimplify somewhat, the exec looks to a research study to reveal which alternative decisions exist (“I can make decision X, Y or Z”); to evaluate the likely success of each option (“Decision X has a high probability of a big payoff for us, while Decision Y has a low chance of success; Decision Z has a high chance of success but a modest payoff even if it succeeds”); and to compare options X, Y and Z in order to find the most workable one (“Compared to Y and Z, Decision X has the best chance of success and the highest payoff if successful”).

In contrast, typically researchers see their mission as describing the market, reflecting their training and the qualitative and quantitative tools available to them (e.g., in survey design, statistical analyses). Researchers usually set out to document current purchase behaviors, explore positioning themes or uncover distinctive demographic and attitudinal characteristics of target segments, etc. From this essentially descriptive information, researchers may suggest implications for action (“The target audience finds this theme most appealing, and so possibly it is a good positioning theme to use”) or they may content themselves with simply describing the current market situation (“Your customer base draws heavily upon middle-class, educated females”) or reactions of consumers (“They like product concept A more than B”).

While researchers can adopt a decision-making perspective, and sometimes do so, usually their focus is on providing descriptive information in a timely and cost-effective manner. Researchers may focus on description for good reasons. For example, they realize that when the managers make a particular decision they may need to take into account factors unknown to the researchers, such as an upcoming change in the company’s product manufacturing process or new contractual obligations to distributors. Or perhaps the researchers’ descriptive approach springs from past negative reactions of execs (“Don’t try to make a decision for me! Just give me the info and I will make the decision!”).

Bridge the gap

Still, our point is that researchers will often be seen as more valuable to an organization when they bridge the gap and translate their findings from researchers’ descriptions into the executive’s framework of decision selection. Going the extra mile in data analysis and report presentation by adding prescriptive elements to the descriptive components can pay off.

Let’s go back to Dawn. One of the clients of her agency is a restaurant chain. Previously, the agency employed a positioning strategy focused upon the chain’s good prices. Now, agency discussions with the client and several industry-wide studies have suggested that current patrons of this type of restaurant are more responsive to themes of convenience, quality and uniqueness. The client turns to the agency for guidance and asks whether the chain should stick with its price focus or should shift to one of the three alternative themes. Accordingly, the client has approved an agency-directed positioning study to help answer the question. It is now the job of the exec heading the agency’s account team to use this research to make a decision: choose a strategy based on pricing, convenience, quality or uniqueness.

The agency’s researchers realize that generating a mass of numbers about the market can bring this fact-finding research effort to a grinding halt. The information must be integrated, simplified and focused upon the decision at hand. As much as possible, the various measures of a company’s success should be boiled down and combined into a few yardsticks or criteria, and the decision options compared on these yardsticks. (This topic was covered in the article “Four indicators, one goal” in the October 2007 issue of Quirk’s. See Web link at the end of this story.)

Accordingly, the researchers sit down with the client and the agency account team and come up with the principal yardstick that would be used to compare the four decision options. In our story the key criterion is: Which positioning theme presents the most appealing picture in the eyes of those ready to dine at a restaurant in the geographical region served by the client’s chain?

The research team then sets out on the study. Here, Dawn makes a proposal to her fellow researchers that potentially can change the value of the upcoming study in the eyes of the execs and can, hopefully, over time improve the sour relationship between execs and researchers at the agency. Based on her prior experience, she suggests that they take four steps. She illustrates the approach with a few simple statistics.

Step one: Use measures of decision factors that are valid, cost-effective and easy to communicate to execs.

A key decision factor is the relative appeal of the four themes. Consumers’ responsiveness to the themes can be gauged with in-depth but fairly complex and expensive measures, such as conjoint or discrete choice modeling. Or it can be done with rating scales and direct questions (“Relatively how important to you are…”) that are quicker and easier, though their validity can be compromised by a variety of response distortions. What is needed, Dawn notes, is an index they can confidently assume identifies which of the four themes can most effectively attract a given consumer to the restaurant. For reasons of cost and ease of communication, she would prefer an analysis that involves simple statistics rather than a fairly complex mathematical model drawn from the decision sciences.

She shows her colleagues the table above. Column B is the percent of consumers who are drawn most strongly to restaurants with that feature (theme). For example, the first row shows that 30 percent of restaurant visitors in the target market are drawn more by cost considerations (column B) than by convenience, quality or uniqueness. Another 20 percent (the second row of column B) are attracted more by convenience than by the other three themes. Column C is the percent of each of these groups that is ready to actually patronize a restaurant it finds attractive. Of the 30 percent attracted most by cost, two-thirds (67 percent, column C) are ready to go to a restaurant they find appealing rather than eating at home or making other plans.

Step two: Cast the analysis from the perspective of a decision’s success or failure.

The theme in column A can be presented as a description of the theme in question, but it can just as rightly be phrased as a decision alternative for an exec. Column D can be understood as the proportion of the total market that has both characteristics (attracted and ready). It is calculated simply by multiplying column B by column C. But more appropriately, it is also the probability of a particular theme meeting the client’s yardstick of success. In this example, an exec has the highest chance of success by using the uniqueness theme; its probability of success, 25 percent, is somewhat higher than the 20 percent probability provided by the current cost-focused campaign theme.
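As a quick sanity check of the column D arithmetic, here is a minimal Python sketch using the cost row’s figures from the text (30 percent attracted, 67 percent ready); the entries for the other themes would be read from the table and computed the same way:

```python
# Column D sketch: probability that a theme meets the success yardstick,
# computed as column B (share of the market most attracted by the theme)
# times column C (share of that group ready to dine out).
attracted = 0.30  # column B for the cost theme
ready = 0.67      # column C for the cost theme

prob_success = attracted * ready
print(f"{prob_success:.2f}")  # 0.20, the cost entry in column D
```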

Step three: Present a numerical index of how a decision choice will increase or decrease the exec’s odds of being successful.

Column E, Dawn confesses, is really for her fellow researchers’ eyes only; it shows the ratios used to compute column F, the column to be shown to the exec. Column E is the odds ratio found in basic statistics texts: the odds of success under one decision alternative divided by the odds of success under another. Here it is calculated by dividing the odds of success with one decision (choosing uniqueness) by the odds of success with a second, baseline alternative (following the present approach, cost).

First, we get the odds of success with the uniqueness decision by dividing its probability of success (.25) by its probability of non-success (.75), giving us .33. Then we get the odds of success with the cost option by dividing its probability of success (.20) by its probability of non-success (.80), giving us .25. Next, we divide the odds for uniqueness (.33) by the odds for cost (.25), revealing an odds ratio of 1.32. In other words, the odds of being successful with the uniqueness approach are 32 percent better than the odds of success with the cost option. This is a measure of the change in the odds of success going from the denominator of the ratio (the baseline cost theme currently used) to the numerator (the decision being considered, uniqueness). Thus, by shifting from a campaign emphasizing cost to one stressing uniqueness, the manager increases the odds of being successful by 32 percent. This is a big jump in most situations!
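The odds-ratio arithmetic in this step can be reproduced in a few lines of Python. This is illustrative only; it mirrors the text’s practice of rounding each odds to two decimals before dividing (carrying full precision instead would give 4/3, about 1.33):

```python
def odds(p):
    """Convert a probability of success into the odds of success."""
    return p / (1 - p)

p_uniqueness = 0.25  # column D for the uniqueness theme
p_cost = 0.20        # column D for the baseline cost theme

# Round each odds to two places first, as the text does, then divide.
ratio = round(odds(p_uniqueness), 2) / round(odds(p_cost), 2)
print(f"{ratio:.2f}")  # 1.32
```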

Step four: Put it into English.

Column F, presented to the execs, is the expression “in English” of how much a given decision increases or decreases one’s odds of being successful against the criterion in question. A number above 1.00 is an increase and a number below 1.00 is a decrease in the odds of success compared to the baseline condition. Here, deciding to choose the uniqueness theme yields a 32 percent increase in the odds of being successful over the current situation’s odds of success.

Doing the same calculations for the quality option yields a change ratio of .33. This is a drop from 1.00: the quality option gives odds of success 67 percent lower than the odds with cost. Again, this is a major drop in the odds of success!
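A small helper function (hypothetical, for illustration) shows how mechanically the column F numbers translate into the plain-English phrasing, using the ratios from the text (1.32 for uniqueness, .33 for quality):

```python
def describe(odds_ratio):
    """Phrase an odds ratio as a percent change in the odds of success."""
    change = (odds_ratio - 1) * 100
    direction = "increase" if change >= 0 else "decrease"
    return f"{abs(change):.0f} percent {direction} in the odds of success"

print(describe(1.32))  # uniqueness vs. the baseline cost theme
print(describe(0.33))  # quality vs. the baseline cost theme
```

A ratio above 1.00 reads as an increase and one below 1.00 as a decrease, exactly as the text describes.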

Dawn’s proposed four-step approach has merit. It speaks the language of the execs - which decision most likely will be successful - and does so without the researchers’ encroaching on their territory. It is quick to calculate and easy to communicate. It focuses the execs’ attention on the decision criteria and thus can reduce the chance of the execs being distracted or overwhelmed by masses of descriptive statistics.

Translate their facts

The numerical example here is simple, but this basic approach can accommodate complex decision factors as well. The key point, as our story of Dawn indicates, is that researchers can translate their facts and figures into the language of decisions. The study can be presented in terms of the likely success or failure of specific options. This is what execs want to hear about. This is what they will appreciate.