Build the reconnaissance principle into research design

Editor’s note: Brian F. Blake is a senior consultant to Action Based Research, Akron, Ohio, and is director of the consumer industrial research program at Cleveland State University. Rod Antilla is president of Action Based Research.

“We want actionable results!” demand executives sponsoring market research projects. Quite rightly they want research that helps them decide what actions to take to reach their objectives. As a result, a number of useful principles or guidelines have been proposed by seasoned researchers as ways to enhance a study’s actionability. These ideas have focused on a project’s ability to help the executives make the right decision. Based on our 40-plus years of combined experience, though, we feel that too little attention has been paid to the other side of the coin, the role of the research in executing the right decision.

We think of this focus on execution as the reconnaissance principle. For research to result in action, the researcher needs to work with the manager (client, sponsor) to detect any obstacles within the company that could stand in the way of an objective review of the data. If roadblocks appear likely, we suggest the researcher work with the manager to devise a procedure to overcome them.

More specifically, some previous suggestions point out that actionability requires that the research address the real issues. In the classic 1985 paper “Backward Market Research” in the Harvard Business Review, Alan Andreasen noted that researchers must first consider the particular marketing decisions to be made and what alternative choices are possible. Then from this understanding the researcher should work backward to design the sample, the questionnaire, etc. The bottom line is that researchers must know what the results of the study will be used for before they can know how to design the research correctly.

Other suggestions note the importance of ensuring that the right people are involved from the initial phase of the project. For example, in the article “6 Steps During Initiation Critical to Efficacy” in the January 20, 2003 issue of Marketing News, Karole Friemann flags the need to select the right people for the research team. These appropriate personnel include both those who actually make the pertinent decisions and those who execute the chosen actions.

Still a third guideline is to insist that the study report its results in a clear, compelling style. A good example is Martin Horn’s comment in “Research with Legs,” an article in the Fall 2002 issue of the Association for Consumer Research News. He instructs researchers to focus the report around critical insights and to create an interesting, tightly woven story that the audience will avidly follow.

Though clearly valuable in their own right, these suggestions need to go further. Let us illustrate the reconnaissance principle with an all-too-frequent scenario, one that is particularly challenging for market researchers.

The situation

An external research vendor (the “researcher”) is called in by a product manager at a consumer electronics firm to do a consumer survey to determine the demand for the various features of a home theater product. The manager is being pressed by the sales staff to develop a new product that is easy to set up, low in price and maintenance-free. The technical staff, though, insists that consumers want a product that has good sound, durability and flexible applications. The technical staffers believe that buyers will trade off a higher price and increased complexity of set-up and maintenance in order to get these benefits. The sales and the technical staffs are increasingly clashing.

What can the manager and the researcher do to ensure that the study gets an objective review by staff and contributes to the company’s action plans?

Implementing the reconnaissance principle

Try an eight-step process.

Step 1. To finalize the survey design, the researcher works with the manager to identify pitfalls they may encounter when making the results of the buyer survey known. In our experience, these pitfalls often include:

  • Staff whose anticipations are not supported by the survey can become vociferous critics of the survey’s methodology (and maybe of the researcher!).
  • The staff may become judgmental rather than analytical when reviewing study results. Instead of assessing why respondents feel the way they do, we may hear the staff say, “Those stupid respondents don’t know what they are talking about!”
  • The “unsupported” staff may fear suffering loss of face when it appears to others in the company that they do not have a feel for their market’s preferences.

Step 2. Before collecting the data the researcher/manager convenes a task force composed of representatives of the numerous staff groups involved in making or in implementing decisions about the home entertainment center. The warring sales and technical staffs are not the only groups brought in. The task force then reviews the questionnaire and survey design. The goal is to have the task force commit to the procedures and so reduce the chances of later criticism of the study methodology.

Step 3. The survey is launched. For illustrative purposes, let’s say that customer preferences for a variety of product features are measured by a simple 10-point rating of each feature’s appeal.

Step 4. The researcher asks the staff groups (preferably the bulk of the members of these groups and not just the task force members) to answer the survey as they think typical buyers would answer. These projections would include the stated preferences of buyers for the product features.

Why do this? Three reasons. First, it provides the data for the mapping in Steps 5 and 6. Second, it encourages the staff to put themselves in the shoes of the consumers and pushes them to think analytically about buyer reactions. Finally, it helps the researcher focus the later presentation of the survey results on issues that are unanticipated by the staff rather than upon boringly detailed information the staff already knows.

It’s critical that the researcher stress that staff projections will be grouped together and that no individual’s projections will be made public. After all, the researcher would not want to make the staff feel defensive at this request for predictions!
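For readers who want a concrete picture of how these projections might be handled, here is a minimal Python sketch using pandas. The file names, column names and feature labels are hypothetical; the important point is that projections are averaged within each staff group, so no individual staffer’s answers are ever visible:

  import pandas as pd

  # Hypothetical files: one row per buyer or staffer, one column per product
  # feature, all rated for appeal on the same 10-point scale.
  buyers = pd.read_csv("buyer_survey.csv")        # actual buyer ratings
  staff = pd.read_csv("staff_projections.csv")    # staff answering as a "typical buyer"

  features = ["easy_setup", "low_price", "no_maintenance",
              "good_sound", "durability", "flexible_use"]

  # Average projections within each staff group so no individual is identifiable.
  group_projections = staff.groupby("staff_group")[features].mean().round(1)

  # Actual buyer means, kept aside for the comparisons in Steps 5-7.
  buyer_means = buyers[features].mean().round(1).rename("all_buyers")

  print(group_projections.T.join(buyer_means))

Keeping the buyer means and the group projections in one table makes it easy to see, even before any mapping, where each staff group’s expectations diverge from the market.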

Step 5. The researcher analyzes the data for a three-phase presentation. The first phase (Step 6) will address internal differences of opinion and encourage all staff groups to see the study results as a company-wide win-win situation. The second phase (Step 7) will show the results for the staff as an integrated whole, and the third phase (Step 8) is the “real” analysis, the one that would have been conducted had no additional steps been needed to overcome internal dissension.

To prepare for the first presentation phase (Step 6), the researcher calculates the predictions of each staff group. Next, respondents whose answers match those predictions are pulled from the data. For example, respondents are selected who match the predictions of the sales staff, i.e., respondents with strong preferences for ease of set-up, low price, and no maintenance. These selected respondents are grouped together, labeled Sector 1, and their personal profiles are drawn from the data. In this case, Sector 1 is found to be mainly high-volume buyers of electronics who are current customers of the firm, middle-aged and affluent.

The same procedure is followed for the technical staff. Its matching respondent group, labeled Sector 2, is found to be mainly young, highly educated and professional. Then the matching respondent sectors and the sector profiles are computed for the other staff groups (e.g., communications, distribution).
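“Matching” respondents to a staff group’s predictions can be operationalized in several ways. The sketch below, continuing the hypothetical data frames above, uses one simple rule: each buyer is assigned to the staff group whose projected profile is closest to that buyer’s own ratings, and the resulting sectors are then profiled on hypothetical background variables.

  import numpy as np

  profiles = group_projections.to_numpy()          # one row per staff group
  ratings = buyers[features].to_numpy()            # one row per buyer

  # Distance from every buyer's ratings to every staff group's projected profile.
  distances = np.linalg.norm(ratings[:, None, :] - profiles[None, :, :], axis=2)

  # Each buyer joins the sector matching the staff group they most resemble,
  # e.g., Sector 1 = buyers closest to the sales staff's predictions.
  buyers["sector"] = group_projections.index[distances.argmin(axis=1)]

  # Profile the sectors on hypothetical background variables from the survey.
  profile_vars = ["age", "income", "units_bought_per_year"]
  print(buyers.groupby("sector")[profile_vars].mean(numeric_only=True))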

A graphic display is then prepared to show the preferences of each staff group and the matching sectors. While a wide variety of presentational formats is possible, we like to use a preference map generated by multidimensional scaling. The SPSS and SAS statistical packages have handy programs to do this. A good non-technical description of this scaling approach is in the 1996 book by James Myers, Segmentation and Positioning for Strategic Marketing Decisions. Whatever presentational format is used, however, the display should show simply and clearly that the staff groups differ in their projections and that each staff group is “in tune” with consumers (at least with some of them).
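For readers working outside those packages, the rough Python sketch below (continuing the data frames and imports from the sketches above) uses scikit-learn’s MDS on a hand-built joint dissimilarity matrix. It is only a crude stand-in for the packaged preference-mapping routines: the “appeal becomes distance” conversion and the scaling constants are our own assumptions. Still, it produces a comparable two-dimensional picture of ideal points and feature points.

  import matplotlib.pyplot as plt
  from sklearn.manifold import MDS

  # Mean appeal ratings: rows are groups (matching sectors plus staff
  # projections), columns are the product features.
  sector_means = buyers.groupby("sector")[features].mean()
  sector_means.index = ["Sector: " + str(s) for s in sector_means.index]
  staff_means = group_projections.copy()
  staff_means.index = ["Staff: " + str(s) for s in staff_means.index]
  pref = pd.concat([sector_means, staff_means])

  M = pref.to_numpy()
  n_groups, n_feat = M.shape

  # Joint dissimilarities: group-to-feature distance is 10 minus the rating
  # (high appeal = short distance); group-to-group and feature-to-feature
  # distances are scaled Euclidean distances between rating profiles.
  gg = np.linalg.norm(M[:, None, :] - M[None, :, :], axis=2) / np.sqrt(n_feat)
  ff = np.linalg.norm(M.T[:, None, :] - M.T[None, :, :], axis=2) / np.sqrt(n_groups)
  gf = 10.0 - M
  D = np.block([[gg, gf], [gf.T, ff]])

  coords = MDS(n_components=2, dissimilarity="precomputed",
               random_state=0).fit_transform(D)

  # Plot ideal points (dots) and feature points (triangles) with labels.
  labels = list(pref.index) + features
  for (x, y), name, is_group in zip(coords, labels,
                                    [True] * n_groups + [False] * n_feat):
      plt.scatter(x, y, marker="o" if is_group else "^")
      plt.annotate(name, (x, y))
  plt.title("Preference map (rough MDS sketch)")
  plt.show()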

The Phase 1 map shows the results for the technical and sales staffs. For simplicity, the projections and the matching sectors of the other staff groups are not shown.

In the preference map the locations of the points and the distances among them are mathematically estimated from the rating data by the multidimensional scaling routine. The red dot represents a group’s ideal point, i.e., the type of product that the group considers most appealing. The closer a product feature (indicated by a yellow triangle) is to an ideal point, the more appealing that feature is to the group in question. In the Phase 1 Map, Sector 2 respondents prefer good sound and durability over easy set-up and no maintenance.

The closer together two ideal points are, the more the groups’ preferences agree. Thus the ideal point of Sector 1 respondents and the sales staff’s projection for Sector 1 are fairly close together, showing that the sales staff has accurately predicted the preferences of Sector 1 respondents. As anticipated, the sales and technical staff projections are quite different.
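To make the “closer means more appealing” reading concrete, the coordinates from the sketch above can be turned back into a short ranked list of features for each ideal point. This is only an interpretive aid layered on our sketch, not something the mapping routine itself reports:

  group_xy, feature_xy = coords[:n_groups], coords[n_groups:]

  # For each ideal point, the nearest feature points are the features that
  # group (or its matching sector) finds most appealing.
  for name, xy in zip(pref.index, group_xy):
      nearest = np.argsort(np.linalg.norm(feature_xy - xy, axis=1))[:3]
      print(name, "->", [features[i] for i in nearest])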

Step 6. The research results are presented to the task force. The Phase 1 map demonstrates to the sales staff that, indeed, they are correct in feeling that a substantial number of consumers demand easy set-up, no maintenance, and low price. The map also shows that, indeed, the technical staff is correct in feeling that a substantial number of consumers want durability, good sound and flexible operations.

The researcher points out that the high-volume, middle-aged, affluent buyers (Sector 1) are grabbing the attention of the sales staff. The young professional market (Sector 2) is making its views known to the technical staff. The researcher concludes that the anticipations of the sales and technical staffs differ but are each realistic - and that certainly no group can be accused of being out of touch with the market!

Step 7. Next in the presentation the Phase 2 map displays the predictions of combined company staff, demonstrating that by integrating the views of all staff groups the company has its finger on the pulse of the market as a whole. The map reveals that the market as a whole wants good sound, low price and durability.
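How “combined company staff” is defined is itself a choice. The snippet below assumes a simple average over all individual staff projections (weighting each staff group equally would be another option) and sets it next to the overall buyer means; appending these two rows to the pref table and re-running the MDS step above would place them on the Phase 2 map.

  combined_staff = staff[features].mean().rename("combined_staff")
  all_buyers = buyers[features].mean().rename("all_buyers")
  print(pd.concat([combined_staff, all_buyers], axis=1).round(1))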

Step 8. The researcher commends all staffers for their sensitivity to market nuances, reiterates that the combined staff group has an accurate picture of the overall market demand, and then launches into the strategically meaningful analysis.

Actionable and appreciated

In summary, this case is a good example of the reconnaissance principle at work. The first seven steps make the staff groups more amenable to an open-minded analysis of the survey data, less ready to attack the study (and the researcher) defensively, and more willing to take the actions that the Step 8 analysis shows to be warranted by market conditions.

So following the reconnaissance principle is not only a way of avoiding obstacles; it is also a form of team-building and a means of developing internal consensus around the company’s strategic actions. Planning ahead to effectively address internal obstacles pays off by yielding research that is actionable…and appreciated.