Much has been said and written about the issue of data quality in our industry. Now, it appears, something can finally be done.

As a product of the data quality-focused conferences it has staged in conjunction with other organizations over the past few years, RFL Communications Inc. - publisher of Research Business Report and its associated newsletters - has released a booklet that compiles five platforms for data quality. (Readers can download the booklet at www.rflonline.com.)

The platforms, which Bob Lederer, founder and president of RFL Communications, affectionately calls “data quality for dummies,” are intended to be simple, free or nearly free steps that researchers at client-side firms can take to make sure the data they are getting from their vendors is the best it can be.

In an interview, Lederer is quick to stress that the information in the platforms is not something he or his firm generated. “It’s not as if any of these clients couldn’t have put this together but it would have taken a lot of time and a lot of effort. And they don’t have time because they are busy running their departments,” he says.

Rather, it is the outcome of a meeting held in April 2008 during which more than 50 research vendors convened, under the watchful eyes of four client-company researchers, and developed a data quality action plan for client-side researchers. “We gave [the vendors] three criteria before they got to the meeting,” Lederer says. “We wanted them to help us create a checklist of recommended things that clients could do or ask their vendors for that 1) would have an immediate impact, either singly or in aggregate, on data quality; 2) would be easy to implement; and 3) would have no or low cost to implement.”

Lederer says he was surprised and impressed by the enthusiastic response he received when he approached a number of research firms with the idea of arming client-company researchers with a buyer’s guide to data quality.

“Back in 2006 when we started doing work on this topic, the vendors said, ‘Go do it!’ I was afraid they would be upset, that it would appear that we were taking sides. It was just the opposite. They said, ‘We’ve tried to get the attention of the clients that we call on and we just never can, so if you can do it, you will be helping us.’ And I said, ‘How will I be helping you?’ And they said, ‘We will increase our sales and our market shares because by asking about these issues, you get rid of all the bottom-feeders, the companies that sell strictly on price. Those companies won’t be able to answer these questions.’ ”

Specific examples

Taking ESOMAR’s original 25 Questions to Help Research Buyers of Online Samples (which were recently expanded to 26 questions) as a jumping-off point, the attendees of the April meeting felt that any new guidelines should contain more specific examples and that the advice should be separated into examinations of metric- and non-metric-related issues, Lederer says.

Platform one looks at the non-metric issues, such as the processes and the methodologies that the panel company and the research agency follow. For example, how are panel members recruited, are questionnaires tested prior to full-scale fielding, and what is done about respondent satisficing?

“Metrics that matter,” as Lederer and the group termed them, are the focus of platform two. “The clients wanted to know if there were any kinds of statistical information that would be able to give them an immediate sense of calm that there really was some quality behind this research, that certain things had been done or been quantified that would indicate to the client the attention that the vendor paid to the panel and to the research itself,” Lederer says.

In addition to compiling the metrics, Lederer says it was important to define them as well, so that every vendor follows the same guidelines rather than coming up with its own way of calculating something like response cooperation.
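To see why shared definitions matter, consider a cooperation-rate calculation. The sketch below is purely illustrative - the simple definition it assumes (completes divided by completes, refusals and break-offs) is ours, not the booklet’s - but it shows how two vendors working from different denominators can report very different numbers for the same fieldwork.

```python
# Illustrative sketch only: assumes a simple cooperation-rate definition
# (completes divided by all eligible contacts who started or refused the
# survey); the booklet's own definitions may differ.

def cooperation_rate(completes: int, refusals: int, break_offs: int) -> float:
    """Share of contacted, eligible respondents who completed the survey."""
    contacted = completes + refusals + break_offs
    return completes / contacted if contacted else 0.0

# The same fieldwork looks different if one vendor quietly
# excludes break-offs from the denominator:
print(cooperation_rate(800, 150, 50))  # 0.8   (break-offs counted)
print(cooperation_rate(800, 150, 0))   # ~0.84 (break-offs excluded)
```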

Platform three looks at the various traps that can be used to foil fraudulent respondents, though admittedly the information it contains is the most subject to change, thanks to the ingenuity of respondents who seek to subvert the survey process. “When it comes to traps, and setting them in surveys, the research companies have to be very nimble because the people who are gaming surveys very quickly figure out how you are trying to trap them. As a result, we can’t give very many recommendations [in the platform] that will hold true even six months from now but there are new ways that companies are finding to spot these people and it’s important to look at them,” Lederer says.
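By way of illustration only, a trap of the sort Lederer alludes to might pair a “red herring” item (a fictitious brand no honest respondent could have used) with a simple speed check. The field names, brand and threshold below are hypothetical, not taken from the platform document.

```python
# Hypothetical sketch of two widely discussed trap techniques: a
# "red herring" brand that does not exist, and a speeding check.
# All names and thresholds here are illustrative assumptions.

FAKE_BRAND = "Zorvex"  # fictitious option planted in a brand-usage list
MIN_SECONDS = 240      # flag anyone finishing an ~8-minute survey faster

def flag_respondent(answers: dict, duration_seconds: int) -> list[str]:
    """Return the list of quality flags raised for one respondent."""
    flags = []
    if FAKE_BRAND in answers.get("brands_used", []):
        flags.append("claimed use of a nonexistent brand")
    if duration_seconds < MIN_SECONDS:
        flags.append("completed implausibly fast")
    return flags

print(flag_respondent({"brands_used": ["Acme", "Zorvex"]}, 180))
# ['claimed use of a nonexistent brand', 'completed implausibly fast']
```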

Platform four focuses on panel transparency, and suggests information that panel providers should be prepared (and willing) to share with clients. Lederer says vendors felt that clients should never accept the answer “I can’t tell you, it’s proprietary” if they ask their panel or research firm for explanations of panel particulars. As the document states, “In the end, many vendor attendees agreed that every research supplier should be prepared to share ‘everything’ that does not involve proprietary intellectual property or a violation of panelist privacy.”

“Vendors can’t tell you every little detail about what they do methodologically, but they can certainly fill you in on most of the details and give you a tremendous comfort level about issues such as recruiting and churn,” Lederer says.

Platform five comprises a series of actions - such as avoiding overlong surveys and rewarding even those respondents who don’t qualify for a survey - that vendors feel are within client researchers’ direct control and can help produce better research and more satisfied respondents.

Not paying attention

So how did the industry get to the point where data quality has become a problem? Lederer cites several factors that have combined, almost perfect-storm-like, to put us where we are today: clients not paying attention over a protracted period; vendors assuming responsibility for data quality and never being asked to explain their work; and entrepreneurs entering the industry and selling entirely on price to a client base unduly focused on price.

With those issues as a backdrop, and with a general lack of awareness of data-quality problems among client-side researchers, Lederer believes that the suggestions in the platforms will gain traction among research clients as word of their existence spreads. “I believe buy-in is occurring and will occur because of a sad reality: there is no data-quality expertise on the client side. Two years ago, we identified about 12 client-side researchers who were doing research-on-research for their own data, thereby earning our designation of ‘expert.’ Today, because of tightening economic pressures on research department operations, that number has shrunk to fewer than six. And I would only truly vouch for one as being proactive.

“Clients very likely recognize this, but the thing they know for certain is that in the last decade they have abrogated their data quality responsibilities and have chosen to become totally dependent on their vendors to do the detailed dirty work for them, with no questions asked,” he says.

Real depth

One of the reasons the April meeting resulted in so much valuable information is that all assembled agreed that the time for more talk was over, Lederer says. “I told the vendors that we were not going to leave the room with pablum-type answers. There has to be some real depth here. From the beginning, our meetings have been held with the goal of achieving something. Education and awareness-building are wonderful but in the end we had to come out with a plan or a goal or some action orientation. There has already been a lot of talk. We were committed to take it beyond that.”

Lederer is already thinking of the next steps. RFL and conference organizer IIR will hold another data quality event, titled “The Research Industry Summit: Solutions That Deliver Quality,” in Chicago on November 6-7. In addition, now that the no- or low-cost solutions have been compiled, he is considering asking vendors for ideas that do have some costs attached to them. Along with identifying more insights that may lead to better data, such an undertaking may get some client-side researchers to really examine the downsides of an unwavering focus on price.

Citing a recent study from Indianapolis-based G & S Research that found dissatisfaction among client-side researchers in the pharmaceutical business, Lederer says he would like to break some of the client mentality that says lowest cost is the most important factor when conducting marketing research. “The same [researchers] who admit in the survey that cost is their biggest consideration when picking a vendor admit that [that approach] is not working, so they know they’re not getting very good research and they are not really saving very much. Because after they nickel-and-dime the vendor for basic research, if they ask for anything more the price goes up so they’re not getting good value. If they’d stop obsessing on cost, maybe they’d get better research.”