Editor’s note: Jonathan Sorensen is a marketing analyst at Sorensen Associates Inc., a Troutdale, Ore., research firm.

We have all experienced what we might call the project from hell. This is the project that looks eminently executable and then, for unforeseen reasons, simply falls apart. Despite the anxiety and frustration these problems cause, there are two essential lessons to be learned from projects gone awry:

  • Don’t think failure; understand that reality is speaking to you.
  • When projects go wrong, what you learn is often of equal or greater value than the data you were seeking.

These situations are particularly frustrating because everything looks so good on paper. The client and account staff design the test, and it looks highly executable, even to the experienced operational eye. The client is competent and the researcher is competent, but the study design assumptions and market reality never meet.

Take, for example, a study once conducted in Chicago, the city the client had deemed its “best market in the country.” The product samples were to be purchased in-store. Two days and 50 store visits later, there was no product to be found. In a similar case in Boston, with a different client, the competing test product was finally found, dusty and hidden at the back of the shelves; it had been discontinued the year before. In another study, a client had estimated a 30 percent incidence. The reality was a 5 percent incidence. How should we react to these types of situations?

First, there is no sense in assigning blame. In fact, usually no one is at fault. For any number of reasons, the study design can contain misinformation that frustrates execution. Screening specifications for respondents may unwittingly be too restrictive. Either the target population doesn’t exist as conceived or, more important, the client may have misdefined it. Maybe the real market wasn’t who or where the client thought it was.

Often, the client may be unaware of its actual product distribution. At the broker’s direction, the manufacturer may ship a product off to a warehouse from which it is sent to a variety of distributors. Through three or four degrees of separation, the manufacturer has little concrete knowledge of who really buys the product.

Also, miscoded scanner data can throw off a product’s counts. The garbled scanner data gives the manufacturer an inaccurate market picture, and the client then designs a study based on that picture. So, study assumptions can be flawed regardless of the client’s or the researcher’s expertise.

It is much more important to realize that when serious execution problems occur, reality is talking to us. Since it is the researcher’s general aim to report market reality, this is extremely valuable information. Thus, though we want field work to run smoothly, hammering a project through serious problems just to complete a report quickly does the client a disservice. It would be like a reporter, sent to interview an eyewitness, who loudly insists on a version of events that contradicts the eyewitness’ own account. To avoid this we should react with sensitivity and curiosity instead of frustration.

Fortunately, barring field service mishaps, execution difficulties are richly informative. Simple and valuable lessons may be learned from messy situations. Consider again the introductory examples. In the Chicago study, it turned out that the Midwest was a major market, but with Chicago as the distribution point; Chicago itself was not, per se, a significant market. This information may have been known to some in the company, but not to the decision makers who were acting on a flawed picture of the market.

In the second example, at the time the client’s marketing group initiated the project, the product was judged to be a serious competitor. But as the marketing wheels slowly turned, reality changed. Unknown to marketing, the competing product had been pulled. Now marketing and product development were trying to solve a nonexistent problem. Embarrassing information? Maybe. But crucial to an effective response.

As for the low incidence rate, this often means a minuscule market, unworthy of serious attention. So, regardless of the mistaken design assumption, it was important for the client to take the true incidence rate into account. In all these cases, the truth shining through the tatters of the project was of equal or greater value than the originally sought-after data.

It takes a lot of sensitivity and curiosity to recognize the truth when it contradicts our assumptions. And reporting that contradiction requires credibility and confidence. But the willingness to listen to market reality, even when it defies project parameters, will help the researcher develop a more accurate picture of the real market. And, as demonstrated, a more accurate picture of market reality is very valuable to the client. To pursue this reality we only have to remember that when a study, carefully designed according to market preconceptions, breaks apart on the rocks of market reality, that reality is communicating as loudly and clearly as it can. The only failure possible is the failure to listen closely and to learn the truth.