The democratization of research

Editor’s note: Bill MacElroy is president of Socratic Technologies, Inc., a San Francisco research firm.

Historically, successful new technologies usually have been the result of many similar approaches competing to become the dominant technology. Many times, it is not the “best” technology that becomes the standard, but the one that can win the most hearts and minds in the battle of self-interest and economic gain. Thomas Kuhn (in The Structure of Scientific Revolutions) called such intervals of experimentation and trial “periods of foment.” Foment here refers to the unrest and hyper-competition, not just between physical techniques but also between mindsets struggling to become the “dominant paradigm.”

Several technological revolutions within recent memory have pitted “excellent technology with restrictive distribution” (e.g., the Sony Betamax format, Apple operating systems) against “good technology with wide distribution” (e.g., VHS, IBM). In both cases, it can be argued that the technology that “won” the largest market share was not necessarily the best performer in every category of the contest, but was the one that gave people the most easily available product at the lowest cost.

A similar period of foment is occurring today in the market research industry, but the nature of the battle seems to be only slowly recognized by many corporate research professionals. This battle is also about low-cost, easily obtainable solutions for conducting marketing research versus high-cost, restrictive options. The two competitors are the corporate research departments and their end-clients: the product decision-makers, newly armed with do-it-yourself (DIY) research software.

For years, professional market researchers have attempted to find ways to expand the role of information gathering within the decision-making process of their firms. The value of research has been extolled in business schools and the marketing discipline since the 1960s. However, the process by which good and reasonable research is conducted, analyzed and disseminated as actionable information has been limited by two very important constraints: time and cost. These two factors have provided a perfect backdrop for a competitive research solution: software that claims to let anyone in the organization do his or her own research, for very little cost and, with the advent of the Internet, in very little time.

Competition in many forms

As internal research professionals look around the corporate landscape, they are beginning to encounter signs of the “competition” in many forms. Departments that are faced with pending decisions are beginning to avoid the traditional route of approaching the MR department, meeting to discuss objectives, waiting for objectives to be transformed into questionnaires, dealing with sample constraints and usually being faced with “large” cost estimates for which fewer-than-required budget dollars have been allocated. Instead, they have found the magic of simply doing it themselves using low-cost, off-the-shelf surveying tools.

Survey tools to perform online polling vary in price and quality, but all have the common allure of giving the individual decision maker the power to “get the job done,” rather than wading through the long and more-expensive corporate research process. Many offer question templates that are presented as insurance against making common research mistakes. Some offer readily available sample sources that can, for a relatively small fee, produce willing respondents to take surveys of all types.

What these tools do not provide, unfortunately, is the education and experience needed to create good, unbiased questions. As the technology puts the ability to do research into everyone’s hands, it is becoming clear that skill in survey design, sampling strategy and analysis is not as accessible as the survey apparatus.

Within the past three months, I have encountered at least three instances within large, well-run companies of “underground research” being conducted by the most unexpected departments. These departments, some of which are far removed from day-to-day client contact, are purchasing (or subscribing to) online survey systems and doing research entirely without external “interference.”

This leads to several observations about the success of the DIY software in competing with the internal research department.

First, managers are reporting that they are “delighted” with the fact that they can quickly and cheaply collect thousands of interviews on issues that are of importance to them. The staff members who are doing the work report high levels of satisfaction with the ease-of-use and ease-of-learning of several of the more commonly mentioned systems. A majority agrees that the output is “very valuable” and is being used both to support large-budget spending proposals and to take corrective actions based on “customer feedback.”

On the face of it, it would seem like the democratization of research is working to everyone’s delight and satisfaction. That is, until one takes a closer look at the actual output being produced.

Unfortunately, there are reasons why good research takes longer and costs more than quick-and-dirty studies. The first can be summed up by the old computing adage: “Garbage in, garbage out.” In the several cases I have examined, basic research design errors were plainly evident, ranging from leading and biased question construction to the use of very poor option sets and/or unbalanced scales.

One of these mini surveys dealt with customer satisfaction and began with a four-paragraph statement about how hard all the people were trying to do a “great job at meeting every challenge” and that “all of your comments will be reviewed by the people who are trying to make a difference for you!” The choices offered for indicating overall satisfaction were as follows:

Extremely Satisfied

Very Satisfied

Satisfied

Somewhat Satisfied

Not Satisfied
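The imbalance is easy to quantify: four of the five anchors sit on the satisfied side. A minimal sketch of that tally (the observed labels are quoted from the survey above; the balanced alternative shown is one common illustrative fix, not a prescription):

```python
# Compare the survey's scale with a balanced five-point alternative.
observed = ["Extremely Satisfied", "Very Satisfied", "Satisfied",
            "Somewhat Satisfied", "Not Satisfied"]
# Hypothetical balanced scale for illustration only.
balanced = ["Very Satisfied", "Somewhat Satisfied",
            "Neither Satisfied nor Dissatisfied",
            "Somewhat Dissatisfied", "Very Dissatisfied"]

def positives(scale):
    """Return anchors expressing some degree of satisfaction."""
    return [a for a in scale
            if "Satisfied" in a and "Dissatisfied" not in a
            and not a.startswith(("Not", "Neither"))]

print(len(positives(observed)), "of", len(observed))  # 4 of 5
print(len(positives(balanced)), "of", len(balanced))  # 2 of 5
```

Four positive anchors against one negative all but guarantees inflated satisfaction numbers before a single respondent clicks.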

The second reason why the success of DIY research may be less beneficial to the company as a whole is the impact of poor research on the public’s impression of the brand. How would it be if someone in the company shipping department decided to run cheap ads that they drew themselves, without clearing them with the company’s marketing and communications department? The same type of negative brand impact occurs when customers receive poorly worded, obviously amateurish surveys delivered through a third-party hosting service with a name like Survey Monkey. No offense to Survey Monkey, but it is just not a name that inspires confidence or resonates well with Fortune 500 brands.

Sampling is of course a major issue even in the most buttoned-up research, but the DIY process promotes quantity as the cure-all for sampling problems. Many systems allow users to send out thousands and thousands of invitations at no additional cost. The most obvious problem with this scheme is that there are no sampling controls at all. In the several instances I have observed, “someone just got a customer list” and sent out as many invitations as they had e-mail addresses. Others mentioned buying e-mail lists for “hardly any money at all.” Leaving aside for a moment that the character of the respondents was no doubt questionable, the larger issue, in my opinion, is that they spammed their lists. Nothing endangers a company’s standing with its constituent publics more than sending spam e-mail and multiple reminders. This point alone should be enough to cause companies to limit or closely control this form of activity.
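Even without a full sampling plan, a capped random draw from a de-duplicated list avoids blasting every address on file. A minimal sketch (the customer list, cap and function name here are hypothetical placeholders; real use would also honor opt-outs and contact-frequency rules):

```python
import random

def draw_invite_sample(email_list, cap=500, seed=None):
    """Draw a simple random sample of at most `cap` addresses,
    de-duplicated, instead of mailing the entire list."""
    unique = sorted(set(email_list))   # de-dupe deterministically
    rng = random.Random(seed)
    if len(unique) <= cap:
        return unique
    return rng.sample(unique, cap)

# Hypothetical customer list for illustration only.
customers = [f"user{i}@example.com" for i in range(2000)]
invites = draw_invite_sample(customers, cap=500, seed=42)
print(len(invites))  # 500, not 2000
```

The point is not the particular cap but the discipline: a controlled subset preserves the rest of the list for later waves and limits the spam exposure described above.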

Finally, and probably most problematic, is that the results from these DIY surveys are being used as justification for rather important decisions. Beyond the obvious problems with the questions themselves, the analytical interpretations of the outcomes are also questionable. And like many “facts,” once research is quoted, no matter how poorly done, it can quickly become accepted as gospel.

Address the issues

The fact that there are problems with DIY research and the way it is being used does not spare the professional researcher from the competition’s compelling message. It is not enough simply to show errors in the implementation or to point out the negative impressions being created. Industry professionals must address the time-and-money issues that make DIY an attractive alternative in the first place. Perhaps the answer is to provide more education for people who really, really want to do these tasks themselves. Another avenue might be to offer quick reviews of draft questionnaires before they are approved for public release. A third option might be to limit the use of customer e-mail addresses as a sample source without prerequisite review.

But whatever the solution, professional research managers might find it in their own best interests to do a little “research on the state of research” and determine the degree to which DIY systems are providing debatable power to the people.