Counting the lessons learned

Editor's note: James K. Callihan is a Seattle-based independent researcher.

As research vendors, we want our clients to value our contribution. After all, our livelihoods depend on it. Occasionally, the drive to demonstrate our particular “value-add” gets blended with a certain overzealousness. Individual projects morph into exquisite opportunities to impress our clients, as if, once and for all, with the singular depth of insight or analytical rigor we imagine is ours alone, we will all but overwhelm our clients with the magnificence of our value.

Or so we imagine. This type of project is already fraught with potential dangers and those perils are magnified tenfold when the vendor in question is also trying to use the project as a way to get back into the client’s good graces after some earlier missteps.

Couple this situation with some inexperienced players and you have the potential for severe disappointment and dissatisfaction, if not disaster, as I will chronicle below.

Can be daunting

Most clients have a list of vendors sanctioned by their organizations. If you’re fortunate enough to be on it, the task is to stay on it. If you’re not on the list, getting on it can be daunting, as clients typically have little motivation to reach outside their comfort zone of sanctioned vendors to engage one that is untested and therefore unproven.

Perhaps the only task more daunting is climbing back onto a vendor list after being knocked off it. Vendors on a sanctioned list quickly understand its benefits and thus they strive to achieve go-to provider status. This road is a long one. Success requires a substantial history of project experiences that consistently meet the client’s needs, however explicit or subtle those needs happen to be. With each successful project, client confidence grows. Trust is soon to follow.

Concentrating so much energy on a single project to propel a vendor to some exalted status generally fails to achieve a goal that is, after all, exaggerated to begin with. The notion that a single project is imbued with such dramatic potential takes the vendor down a path that confuses complexity with sophistication. Imagining some awe-inspiring end product, the vendor invests great energy and intellect to expand the project, broaden its objectives and deepen its analytical rigor. Mesmerized by the glistening brass ring that seems so close at hand, it gives little thought to what can go wrong and, perhaps more to the point, what stands to be lost if things do go wrong.

Fortunately, most of us know better and many of us have had the good fortune of working with clients who are reasonably sophisticated researchers. They know better too. So while the temptation may be present, more often than not projects take only the shape needed to meet their objectives within the parameters set by budgets and timelines. Excess is avoided: in part because it's expensive, in part because it adds a complexity that can threaten a timeline, but most commonly because it's recognized as simply unnecessary. Enough, in the vast majority of cases, really is enough.

Completely enjoyable

Those of us who have been privileged to work with clients who are sophisticated researchers know how enjoyable it can be. But it's a far different – and far more challenging – undertaking when the client is inexperienced in the ways of market research. In discussions with sophisticated clients, technical terms are used with an ease bred from knowing that all those at the table understand them. Nuance and subtlety can be employed.

Not so with clients who haven't been around our particular block more than a time or two. Here, extreme care is required to ensure we're not talking over or past the client. Vigilance is needed to spot the subtle cues that the client isn't tracking the conversation, is puzzled or uncertain. Tact is essential as we don the hats of both educator and advisor. Speaking clearly is perhaps a tad more important than listening carefully, since it's here that we're tasked with both selecting the route and explaining why it's likely to be the most judicious, most productive and most useful of the available options. And it's here, when we find ourselves working with clients who aren't deeply experienced in the ways of research, that the risk of doing too much – of going overboard – is perhaps greatest.

The situation becomes exponentially more precarious when the vendor enters discussions with inexperienced clients with a private sense that here is that wonderful opportunity to leapfrog what is otherwise a path of measured steps and incremental progress. It's not necessarily the case that the vendor sees the chance to take advantage of the inexperience that sits across the table. It's a more subtle process. Having been on the vendor side of a couple of projects that were ill-conceived in their overzealousness, I can say that the vendor team imagines its work to be spot-on and, all the more, is fully certain its design and analytic roadmap are exquisitely well-suited to the client's needs – whether the client realizes it or not.

A perfect storm

The most recent of these ill-fated projects was, fairly speaking, the most disastrous. A perfect storm of sorts brought together a vendor whose status with this particular client had been seriously weakened and a client project manager who was not only new to his job but about to embark on his first market research project. On the one hand was a vendor eager to rebuild a crumbling relationship, while on the other was a bright young neophyte equally eager to make sure his first project provided maximum insight and guidance to his team of managers and product developers.

It all began innocently enough. The first meeting for this business-to-business research project involved only a senior manager from the client's organization and two of us representing the vendor. After the customary pleasantries, the senior manager provided a detailed overview of what she needed from the project. On the whole it was, as she said, a "very straightforward" project. As the discussion progressed it was clear we were talking about a simple feature-prioritization study and that a max-diff exercise was likely to be the most appropriate technique. The respondent population was clearly framed and it was generally agreed that accessing the appropriate sample wouldn't be difficult. The timeline, however, was critical. The senior manager needed the data to provide statistically robust guidance to her product developers. Their task of designing the product's next-generation features would begin within two months and the market data was needed to build consensus among them as to which of a long list of possible enhancements would prove most compelling to customers. As a project that would help us regain our footing with a terribly important client, it couldn't have been nicer.
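For readers who haven't run one, a max-diff (maximum difference scaling) exercise shows each respondent a series of small subsets of the feature list and asks which item in each subset matters most and which matters least; the pattern of "best" and "worst" picks then yields a prioritized ranking. The following is a minimal sketch of that logic in Python, with invented feature names and a toy preference model standing in for real respondent choices; production studies use a balanced experimental design and typically estimate utilities with hierarchical Bayes or logit models rather than the raw counts shown here.

```python
import random
from collections import Counter

# Hypothetical feature names -- stand-ins for the client's nearly three dozen enhancements.
features = ["Faster sync", "Offline mode", "Audit log", "SSO login",
            "Bulk export", "Custom alerts", "API access", "Dark mode"]

random.seed(42)
best_counts, worst_counts, shown_counts = Counter(), Counter(), Counter()

# Each simulated respondent completes several tasks; each task shows a random
# subset of 4 features. Real designs balance how often features appear together.
for respondent in range(200):      # the per-audience n the project manager was told sufficed
    for task in range(6):
        subset = random.sample(features, 4)
        for f in subset:
            shown_counts[f] += 1
        # Toy preference model: items earlier in the master list are "better".
        ranked = sorted(subset, key=features.index)
        best_counts[ranked[0]] += 1
        worst_counts[ranked[-1]] += 1

# Simple count-based score: (times picked best - times picked worst) / times shown.
scores = {f: (best_counts[f] - worst_counts[f]) / shown_counts[f] for f in features}
for f, s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{f:15s} {s:+.2f}")
```

The appeal of the technique for a project like this one is exactly its simplicity: it forces trade-offs, so a long feature list comes back as a clean priority order rather than a wall of uniformly "important" ratings.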

The proposal stage went well. Indeed it should have, since there was nothing overly complicated about the project. Once finalized, the proposal was sent on schedule to the senior manager who, despite the tight timeline, took much longer than we expected to review it. So long, in fact, that we found ourselves fearing that the project had been given to another vendor. Our frustration was palpable. As every step had been taken to submit a thoughtful proposal with a competitive budget, we began to worry that our company's recent shortcomings were now blocking our efforts to begin anew. Quickly imagining that every proposal would meet the same end, we visited the possibility – more aptly, the probability – that the relationship with this client was beyond repair.

Remained confident

But we were wrong. Very late one Friday evening we received an e-mail from the senior manager congratulating us on being awarded the project. She acknowledged that the timeline would have to be revisited due to how long she had taken to make a decision but she remained confident that the findings could still be delivered in time to provide direction to the designers. Allaying our fears, she also remarked that she was “excited” about working with us and added that she was “looking forward to highly actionable results.” It was in this e-mail that we learned that the day-to-day management of the project would be handed over to a more junior member of her team.

Thus the stage was set. For reasons beyond my comprehension, the simple design we had set forth in our proposal was cast aside without compunction, as if it were nothing more than scribbles on a cocktail napkin intended only to get our cerebral juices flowing. No sooner had we been awarded the project than our team began replacing its straightforward design with one vastly more complex, based on the assumption that a more sophisticated design was needed if we were going to demonstrate the value we brought to the table. We would protect the timeline and, of course, the budget. But otherwise ours was the "obligation" to evolve the simple design into something more elaborate in order to obtain findings that would prove more broadly beneficial. In the end we'd be rewarded with a strengthened relationship and, surely, additional project opportunities.

Nothing went as planned

Suffice it to say that nothing went quite as planned. During her initial review of our proposal, the senior manager e-mailed to ask if we could increase the n from 400 to as many as 800. It was important, she said, to make sure the findings carried as much statistical validity as possible, since her product developers tended to be highly intelligent engineers who would require very robust data to be convinced as to which specific features they should focus on. It was her thought that an n of 800 would surely do the trick. Through a bit of back-and-forth, we suggested – and she agreed – that an n of 600 would suffice. We made, and she accepted, changes in the budget to cover the increased sample size. But by the time the project's design had been finalized, a single targeted sample of 600 had been transformed into three audiences of 200 respondents each. Meetings with the client's inexperienced project manager convinced him that an n of 200 for the original audience would be sufficiently robust – important for him to know, since this was the sample that would be doing the feature prioritization. It no longer mattered that the senior manager had earlier pushed for an n of 800. Far fewer, he was told, would be enough.
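As an aside on the numbers being debated: for a simple proportion, the 95 percent margin of error shrinks only with the square root of n, so doubling from 400 to 800 buys less precision than intuition suggests – and the quiet reduction to 200 per audience costs more than it appears. A quick back-of-the-envelope calculation (using the standard worst-case assumption of a 50 percent proportion, not any figure from the actual project) makes the trade-off concrete.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion; p=0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

# The sample sizes discussed at various points in the project.
for n in (200, 400, 600, 800):
    print(f"n = {n:3d}: +/- {margin_of_error(n) * 100:.1f} percentage points")

# Output: n=200 gives +/- 6.9 points; n=400, +/- 4.9; n=600, +/- 4.0; n=800, +/- 3.5.
```

In other words, the jump from 600 to 800 would have tightened estimates by only about half a percentage point, while splitting 600 into three audiences of 200 nearly doubled the margin of error on the one sample that actually mattered.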

The fact that three audiences were required reflects the dramatic changes that had been made to the project's design. The multiple audiences would enhance the study by providing insights relative to competitive positioning, market sizing and sundry other objectives that were there simply to be addressed. Now it was possible to conduct three interconnected surveys, not just one. No doubt the rewards would be commensurate with the added effort. But the more complex design created quite a bit of extra work. While a portion of it involved only our internal team's time and effort, a very sizeable portion fell on the shoulders of the young and inexperienced project manager. Now he was required to populate all sorts of lists, provide official definitions for an array of terms and write succinct descriptions for nearly three dozen features on the list to be prioritized. Had the sample been confined to the original audience, little of this would have been required of him.

Given that he’d been in his job for less than a year, his progress required enlisting input from others in his organization who were already extremely busy and quite fully engaged. Delay ensued. Instead of a study that would require six weeks from start to finish, it had become a study that was more than six weeks off-schedule by the time it went to field.

Quotas didn’t fill

Things would only get worse. Fielding wouldn’t go nearly as well as predicted. While the original audience quota of 200 filled quite rapidly, the others didn’t. In fact, the pace was so slow that qualifiers were relaxed not once or twice but three times. And still the quotas didn’t fill. Corresponding adjustments in the project’s timeline were made. When all was said and done, the project that began with a tight and critical timeline was delivered two-and-a-half months late.

By then, things had gotten much worse. For the first time in a 20-year career, I was dismayed when the young project manager informed us that, due to the delays, the project's primary objective – prioritizing a list of product features – had been pushed off the table. Decisions had to be made and the product designers could no longer wait for the results of the research. Their efforts had to get underway if their schedule was to be met. Missing it simply wasn't an option. Consequently, after all the time and effort, the project had become – from the client's perspective – irrelevant.

It was all the more disconcerting to hear the young project manager take responsibility for the delays. In some respects, he was right. He didn't deliver what he promised in a way that could even remotely be considered on time. It's likely he found it far more difficult than he had imagined to get others to provide him with the input he couldn't do without. But it was clear he didn't understand that his workload had been increased by a factor of three or four by the changes our team made to the design. To him this was simply how the process unfolds, and he was duly remorseful that he had slowed things down.

And yes, things got even worse. In following up with the senior manager, we learned that she too was distressed by the project's progression. Unfortunately, but I suppose appropriately, she placed the responsibility squarely on her own shoulders. It was her mistake, she commented, to have left the project in the hands of an "inexperienced" project manager. Had she done her job correctly, she added, she would have stayed more closely connected with the project if only to ensure things didn't go sideways. This, she maintained, was "perhaps the most critical failure." As she noted, "There's not much that happened that wouldn't have been rectified by my own greater involvement."

Blind enthusiasm

Of course, she was right. But not entirely. In the rarefied air of organizational theory the proverbial buck always stops at the top, so from that perspective she certainly was correct. But from any practical perspective, her failing – such as it was – was that she trusted we would embrace the project as she had framed it and use our skills and expertise to move it toward a speedy and fruitful completion. We did neither. Guided by blind enthusiasm, our team transformed a simple project into one of impressive complexity based solely on the presumption that doing so would deliver far more than the client imagined possible.

We were unable to rein in our own eagerness, and there was no one on the client side capable of doing so – especially absent her direct involvement. The young project manager certainly couldn't. This being his first market research project, how was he to know that our more complicated design posed a very substantial risk to an otherwise tight timeline – especially, I should add, given our team's assurances to the contrary? It was only when the timeline actually began shifting that evidence of our "miscalculation" surfaced, but by then he too had become wrapped up in the promise of a project that was going to produce such farsighted insight across an expanse of topics. Like the senior manager, he had trusted our expertise and capabilities. He didn't – he couldn't – evaluate the project's roadmap to assess its efficacy. It likely didn't occur to him that he needed to, what with the trust and all.

And just when you think it couldn't get any worse, it did. As it turned out, a good number of the features the product designers had decided to focus on hadn't found their way to the top in the prioritization exercise. Some actually were near the bottom. I can't say what predicament this caused the designers, if any. It's easy to surmise that more than a modicum of time was spent reconciling the differences between what they thought were the most compelling new features and what the data showed. But it's also possible that the designers refused to be distracted by a study that had probably lost all credibility by the time it was delivered 10 weeks late. Perhaps it was a blessing in disguise that the n had been reduced as it had.

Soundly dashed

By now the dust had settled. Needless to say, the vendor's hope of resuscitating the relationship was soundly dashed. As to the young project manager, we can only hope that his career wasn't cut short by his management of a project that, as I've noted, was deemed irrelevant before it was even completed. I suspect it wasn't. The senior manager's acknowledgement of her own "failing" probably protected him from anything more distressing than a very unpleasant debriefing. That his first experience with a market research vendor tainted his appreciation of vendors' collective value seems entirely probable. No doubt he'll approach his second research project with a far more critical – and far less trusting – eye. That's unfortunate.

Clearly, this is an extreme example of all that can go wrong when a research team becomes overzealous in its efforts to impress the client. It's something, I'd imagine, of a worst-case scenario. But it speaks to two things. The more obvious is the downside of working too hard to do too much. Our team's leadership genuinely believed that the more complex design was eminently practical and completely doable. Perhaps on some scale it was. Oddly, our failing was rooted in an almost altruistic sense that because it was possible to do more, it was our professional obligation to do more. Anything less would be not only unprofessional but beneath us and the expertise we had to offer.

Which leads to the second thing. As the project's design became more and more complex, it was thought of as more and more sophisticated, and it's likely this was the underlying current pushing the project further and further off course. Unwilling to confine ourselves to doing something simple, our research team of four Ph.D.s with over 80 years of combined experience worked doggedly to construct a design we could be proud of, as if its complexity were a direct reflection of the breadth of our methodological prowess.

Lamborghini capabilities

It was, all in all, a process that invested the team's effort more in the task of impressing itself than in attending to the client's stated needs. After all, what was the purpose of having so much intellectual horsepower at our disposal if we didn't use it? With Lamborghini capabilities, how could we possibly justify an effort of only Ford Pinto proportions? We couldn't. If the project was going to satisfy our high standards, if it was to be a true reflection of what we had to offer, it had to be complex – period.

And anyway, the client would be the beneficiary. Or so we imagined. In this instance there's one particular lesson that's relevant to research vendors and clients alike. It's something we already know, but it's also something we'd do well to revisit now and then, if only to remind ourselves of its importance.

There’s no doubt that keeping things simple is much easier said than done, especially when  facing complex issues. But we sometimes forget that market research is our mechanism at for deconstructing the complexity that surrounds us. Our shared interest in “highly actionable insights,” as the senior manager phrased it, is nothing more than a desire for findings that enable us to sift through a multiplicity of options to find a single something to be done; findings, that is, which are simple and straightforward enough to provide clear direction.

The best path

We would do well, it seems, to remember that the best path to a simple outcome is seldom a circuitous one. Further, we must ensure that our research designs are no more complex than what's minimally required to attend to the task at hand. For research clients – especially those with little research experience – this implies a more constant questioning of proposed designs, pressing vendors to confirm that a simpler approach or technique wouldn't suffice. For vendors, it implies honestly scrutinizing each design element to ask whether it is genuinely required to get the job done, and a standing preference for clarity and simplicity.

When clients and vendors work to ensure projects have taken their simplest form, timelines are likely to be shorter, budgets are likely to be less costly for the client and more profitable for the vendor, analyses are likely to be sharper, confidence is likely to be higher, and – most importantly – insights are likely to be more actionable.

After all, the loss of a major client is far from the only consequence of unwarranted complexity. It’s merely one of the more extreme.