As anyone who has ever shopped knows, consumer brand preference can change in the blink of an eye. You might be a regular user of Brand X, but if Brand Y’s box promises a better deal (“20% more free!” or “50 cents off right now!”) even the most loyal consumer would probably rather switch than fight.

One way for manufacturers to prevent such desertion (or cause it among buyers of the competition) is to pre-test promotions. Pre-testing allows a company to determine which in-store campaigns will win the hearts and minds of consumers before they do battle on the store shelves. One company that has had success with pre-testing is the Block Drug Co., maker of the Polident line of denture cleansers. With the research help of Oxtoby-Smith Inc., Block has used the process to maximize the impact of Polident campaigns.

“The Polident business is a very competitive business, almost a commodity in nature,” says Charles Schrank, Polident product manager. “One of the ways that we try to differentiate ourselves from the competition is promotion. We sell a great deal of our product in consumer-identified deal packs (that offer rebates, sweepstakes, and other premiums). During the course of the year we’ll run eight to ten different deals, programs or offers, and they get to be fairly expensive, so we’re always looking to evaluate these programs before we implement them.”

Because these promotions typically aren’t supported by major media advertising (their primary purpose, Schrank says, is to “create a point of difference versus the competition”), the offer itself and the package graphics are what sell the product, so they must reach out and grab the potential customer.

Block uses a two-phase procedure to pre-test its promotions. In one-on-one mall interviews, small samples of consumers are shown several examples of promotional concepts and are asked to choose which they like best. Those promotions that seem worth testing further are then made into mock-ups for a mail survey that asks consumers to evaluate the promotions on purchase interest and overall appeal.

While this method provides valuable information on a promotion’s potential, recent refinements have yielded an even more effective method, which Block began using last year and which Oxtoby-Smith Inc. now offers as a standardized concept called Pro-Sort.

Ranking vs. rating

A key component of Pro-Sort is the use of ranking instead of rating. Rating asks the respondents to assign a score to each of the items concerned, using, for example, a number from one to ten. This gives an indication of how consumers feel, but relying solely on ratings can present problems, says David Smith, vice president, marketing for Oxtoby-Smith Inc.

“There are some consumers who, when asked to rate a product on a one-to-ten scale, will never score anything higher than a two, and there are those who will never score anything lower than an eight. They either love everything or they hate everything. In any case, you end up with distorted findings that don’t accurately predict consumer behavior in the marketplace.”
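Smith’s point is easy to see with numbers. The Python sketch below uses invented data (it is an illustration, not an Oxtoby-Smith calculation): two respondents with opposite preferences average out to a dead heat on raw ratings, purely because of where each anchors the scale, while ranking within each respondent recovers both true orders.

```python
# Hypothetical data: a harsh rater who tops out at 2 and a lenient
# rater who bottoms out at 8, with opposite genuine preferences.
ratings = {
    "harsh_rater":   {"bonus_pack": 2, "coupon": 1},   # prefers the bonus pack
    "lenient_rater": {"bonus_pack": 9, "coupon": 10},  # prefers the coupon
}

# Averaging raw scores: bonus_pack = (2 + 9) / 2 = 5.5 and
# coupon = (1 + 10) / 2 = 5.5 -- a dead heat driven entirely by
# response style, not by preference.
for item in ("bonus_pack", "coupon"):
    avg = sum(r[item] for r in ratings.values()) / len(ratings)
    print(f"mean rating for {item}: {avg}")

# Ranking within each respondent strips the response style out and
# preserves each person's actual order of preference.
for who, scores in ratings.items():
    order = sorted(scores, key=scores.get, reverse=True)
    print(f"{who} prefers: {' > '.join(order)}")
```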

Though ranking avoids this—asking the respondent to rank their choices in a definite order, thus giving an explicit idea of preference—it doesn’t indicate the difference between those choices. “A consumer’s first and second choices can be a mile apart or virtually indistinguishable,” Smith says. “Rank orders in themselves don’t tell you that.” This critical information is what the Pro-Sort methodology provides.

Computer program

Using a computer program designed by Dr. Richard Maisel, a professor of statistics at New York University, the Pro-Sort methodology translates each promotion’s rank order score into a quantitative scale value, then indexes it against a control promotion, which is assigned a value of 100. The use of a previously successful promotion as a control—what Smith calls a “gold standard”—permits clients to see not only how well each of the test promotions scored relative to each other, but more importantly, relative to a promotion that has already proven itself in the marketplace.
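The article does not disclose how Dr. Maisel’s program derives a scale value from the rank orders, so the values in the Python sketch below are invented placeholders; only the final step, dividing each promotion’s value by the control’s and multiplying by 100, comes from the description above.

```python
# Hypothetical scale values, standing in for whatever Dr. Maisel's
# program computes from the rank-order data.
scale_values = {
    "control_bonus_pack": 4.2,   # the proven "gold standard" promotion
    "sweepstakes":        4.1,
    "mail_in_rebate":     2.9,
    "free_denture_bath":  2.6,
}

# Index every promotion against the control, which lands at exactly 100.
control = scale_values["control_bonus_pack"]
for promo, value in scale_values.items():
    index = 100 * value / control
    print(f"{promo:<20} {index:6.1f}")
```

Read this way, a test promotion indexing near 100 is effectively interchangeable with the proven control, while one indexing in the 60s is far below it even if it happens to rank second among the new ideas.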

This kind of information, says Smith, allows a company to implement the promotions that have the best chance of capturing consumer attention.

“You have a marketing environment increasingly filled with promotional clutter,” Smith says. “As consumers are overwhelmed by the proliferation of promotional offers, it is becoming more difficult to elicit positive consumer response. This test permits marketers and sales promotion executives to determine before engaging in a campaign which one of their alternative promotions is likely to be most successful.”

Bonus pack

For its control promotion, Schrank says, Block usually uses a bonus pack offer that gives the consumer extra product for the regular price. These promotions draw strong support from retailers, who recognize their popularity with consumers.

“Again, being in a commodity type business, where price is critically important,” Schrank says, “a promotion which has a lot of immediate value-added impact does well. The consumer clearly perceives it as a great value. They don’t have to send away for anything and it’s something that everyone can use, versus another offer that may only be of selective interest to a certain group. So it’s got universal appeal.”

Preliminary phase

For the preliminary, one-on-one phase, respondents (typically denture wearers age 50 and older who use one of the major denture cleanser brands) are asked their brand preference up front and are then shown 15 to 20 ideas on concept boards. In addition to rank ordering the concepts, they are asked whether, if the idea they liked best appeared on a box of Polident, they would be interested in buying it.

Because the respondents are asked beforehand about their brand preference, Schrank says, this phase of the research gives Block “a sense of whether there is any crossover from the competitive brands and you also see how well it does with the Polident users.”

For the mail survey, the top promotional ideas are made into mock-up boxes that look exactly as they would on store shelves. They are placed on a master sheet, approximately two feet by two feet, which has pictures of eight identical boxes of Polident on it, each bearing a different graphics flag for the promotions (“free denture bath,” “save 50 cents,” “win a free trip to Hawaii”). Under each of the boxes is a two-line description of the major elements of that deal or promotion.

This master sheet, along with a cover letter and instructions, is mailed to consumers who have indicated in a survey that they use denture cleansers (either Polident or a competitor) or who have returned one of the cards Block randomly includes in boxes of Polident for research purposes. As an incentive to return the survey promptly, respondents are given a chance to win cash prizes in a drawing.

Along with providing demographic data, respondents are asked to name the promotion they find most appealing and to explain why, listing primary and secondary reasons, then to do the same for the promotion they find least appealing. All items are ranked on uniqueness, appeal, and purchase intent.
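The article does not spell out how these rankings are tabulated, but one common first pass, sketched below in Python with invented responses, is to average each promotion’s rank across respondents for each attribute, so a lower mean rank signals stronger performance.

```python
from collections import defaultdict

# Hypothetical responses: each respondent ranks every promotion
# (1 = best) on the three attributes named in the survey.
responses = [
    {"appeal":          {"bonus_pack": 1, "sweepstakes": 2, "rebate": 3},
     "uniqueness":      {"sweepstakes": 1, "bonus_pack": 2, "rebate": 3},
     "purchase_intent": {"bonus_pack": 1, "rebate": 2, "sweepstakes": 3}},
    {"appeal":          {"bonus_pack": 1, "rebate": 2, "sweepstakes": 3},
     "uniqueness":      {"sweepstakes": 1, "rebate": 2, "bonus_pack": 3},
     "purchase_intent": {"bonus_pack": 1, "sweepstakes": 2, "rebate": 3}},
]

# Collect every rank given to each promotion, per attribute.
ranks = defaultdict(lambda: defaultdict(list))
for resp in responses:
    for attribute, ordering in resp.items():
        for promo, rank in ordering.items():
            ranks[attribute][promo].append(rank)

# Mean rank per promotion, per attribute (lower = better).
for attribute, promos in ranks.items():
    means = sorted((sum(r) / len(r), p) for p, r in promos.items())
    print(attribute, means)
```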

Analysis

The responses are sent to Oxtoby-Smith for analysis; the firm then supplies Block with the results. “Oxtoby provides us with a rank order of the promotions tested and then the groupings of those promotions (by score) and how one group might differ in purchase behavior versus another,” Schrank says.

“The Pro-Sort methodology is a much more discriminating methodology that enables us to look at, for example, the top five or six ideas and see whether there are any meaningful statistical differences between these ideas. And that’s something we were never really able to do before,” Schrank says.

For example, a promotion may rank number two, but this doesn’t mean it ranks just below number one; there could be a great difference between them. Before Pro-Sort, Schrank says, he couldn’t be sure if he was getting the cream of the crop or if he was just selecting from a group of unsatisfactory concepts.

“(Pro-Sort) really did start to differentiate whether these were good ideas or bad ideas, relative to the control, and it provides a more powerful analysis than what we’ve been able to get in past studies. It not only measures appeal and uniqueness but it more effectively coordinates those two scores against purchase intent. Between those three values it provides more linear scaling; it measures the distance between the ranks. It’s one thing to say ‘This was number one, and this was number ten,’ but it’s a lot more effective for us to know that number one was an acceptable situation and that (those promotions) from number two on were clearly unacceptable.”

An example of this came from the recent research, which showed only one of the ideas to be satisfactory. There was a great deal of difference in approval between the promotion at the top of the list and those below it.

“This research indicated that number one was the only promotion that met the criteria of being equivalent to the control; the other items fell far below it. While they may have been numbers two, three, and four, they were really unsatisfactory.”

Refining the process

Armed with information on which promotions work and which don’t, Schrank says, Block can use time and money more efficiently by refining the process of creating and implementing promotions. “As time goes on we’re going to definitely use this not only to evaluate ideas after they’ve been developed but as a key criterion in the development of new ideas.”

For example, in the coming year Schrank says, Block plans to share the research data with its promotion houses, letting them know which ideas work and which don’t. “We’ll go to them and say ‘You can distill out of this which programs continue to come up toward the top, and based on that, those are the types of programs that we’d like you to come back to us with, and here are the ones that have typically tested (poorly), so don’t bring those back,’” he says.

In addition to assessing which promotions have the best chance of working, the research lets the company justify expenditures. The programs typically cost between $75,000 and $500,000; if an inexpensive promotion scores as well as an expensive one, Schrank can go with the less expensive choice and save money. And if an expensive campaign tests well, he can justify the expenditure to management, knowing that the test results indicate the program will get a good response and is worth the money.

“I can select what we hope are meaningful ideas that are less expensive to execute than our best promotion and be reasonably assured that their impact in the marketplace will be as good as that control promotion.”