Two oft-debated issues - Web advertising metrics and research data quality - got a lot of air time at the annual Advertising Research Foundation Re:think conference in New York last month.

ComScore Chairman Gian Fulgoni moderated a panel on the growing importance of the Internet as an advertising medium. He opened the proceedings with a brief overview of the Web's role as an ad vehicle. Despite 25.6 percent growth in online ad spending from 2006 to 2007, online still accounts for only 7 percent of total ad spending, lagging behind direct mail (21 percent), broadcast TV (15 percent) and even the supposedly near-dead newspaper (14 percent).

Once the panelists were brought into the conversation, the focus naturally turned to the problem of measuring an online ad’s effectiveness. Some argued that clicks are a misleading metric and a poor indication of an ad’s true impact. But, in the minds of many, clicks are all we have. As one panelist noted, marketers and advertisers must measure what they should, not just what they can. New metrics are clearly needed, then, but no one can seem to agree on what those metrics should be. As Lee Doyle, CEO of ad agency Mediaedge:cia North America, put it: “Are clicks and conversions really the right things to measure? You need to measure what’s relevant to each client.”

And, rather than viewing the Web and its metrics in a vacuum, consideration should be given to online advertising’s role as part of an entire marketing campaign. “We have to understand how the media work together. We should all push for that approach as researchers,” said Yahoo!’s Chief of Insights Peter Daboll.

Those sentiments were echoed by Stephen Kim, global marketing director, Microsoft Digital Advertising Solutions. Arguments about which metrics are most valid certainly have a place, but at times they have become too much of the focus. “It’s not black and white. We all understand the caveats. The finger-pointing is counterproductive. We need to move to a place of looking at how data fit together,” Kim said.

Direct request

The following day, Stan Sthanunathan, vice president, marketing strategy and insights at The Coca-Cola Company, moderated a discussion on data quality that included four research company CEOs and four high-level client-company researchers. Eric Salama, CEO of The Kantar Group, made the most direct request to client companies. While acknowledging that researchers in client firms, like those on the vendor side, face pressure to keep costs down, Salama said that if clients truly support the idea of quality data, they must vote with their dollars. “A lot of work has been done in the past year on how we can raise data quality standards. But if you don’t allocate money to the things you think are important, things won’t change. You have to use your financial clout. Clients have a responsibility to make sure that good quality is rewarded and bad quality isn’t,” he said.

Ron Gailey, senior vice president, director of research and customer insight at Washington Mutual, said his firm is paying extra for quality and has internal programs in place to track, validate and evaluate its data. That experience has led him to seek more partnerships with research suppliers, under which both sides can battle data quality problems together. The stakes are huge, he said. “I work at a company that loves research. If I get it wrong, they won’t use me anymore. If I get it right, millions of dollars can be made,” Gailey said.

Jim Nyce, senior vice president, consumer insight and strategy, Kraft Foods, recalled that in his early days at Quaker Oats, the company had an internal vendor who validated research interviews. Such practices are now less common, as more and more interviewing has been outsourced and procurement departments have shifted the focus to cost. As a result, some researchers have taken their eye off the ball on data quality. “Data quality isn’t an enormous issue but it is an issue,” Nyce said. “Our ability to provide knowledge and our credibility rest on the quality of the data we provide. If we don’t have quality data, our work won’t have impact. Ensuring quality data is everyone’s job.”

Work together

Both sessions echoed similar themes: on Web metrics as on research data quality, identifying the problems and agreeing to work together on solutions seemed to be the order of the day.

Some fresh perspectives and a holistic view of Web advertising’s role in selling products and building brands are needed to help marketers get a full and accurate picture of what online ads can and can’t do.

And with research data quality, it seems that enough discussion has occurred to confirm that quality is something worth fighting - and paying - for. It comes down to research vendors and clients determining what is truly important to them.

A long-term view is required. There are dollars to be saved here and there in the short term, but decisions based on poor-quality data can have far-reaching and long-lasting repercussions.