Editor’s note: Ben Proctor and Kerri Norton are insights strategists at New York research firm Miner & Co. Studio.
A combined 15 years of ad campaign testing. Work on ads that have launched some of the biggest entertainment properties of the past decade … ads that have redefined some of the nation’s leading providers of media and technology. So, do we call ourselves experts? Not even close.
The reason is that campaign testing remains one of the most elusive aspects of market research. Let’s be honest – if anyone had figured it out, they would be the only research shop in existence and their clients would have no competitors after putting everyone else out of business.
But we have learned a thing or two in our years behind the curtain. Here are some of our favorite lessons and how they've shaped our approach to campaign testing.
• Most important is understanding the additive quality of campaigns. Many times, clients want to find and use the TV commercial that scores best across perhaps dozens of rounds of ad testing. And that's understandable. But problems arise when the digital team is testing their banner ads in a vacuum and the outdoor team is testing their billboards without knowing that TV commercials are even in development. It's an era of new technologies and channels for getting the message out there. In reality, people experience a campaign that spans all of these channels, so it's important to get a read on the campaign as a whole.
While we understand the importance of testing individual ads and do it often, a key component of our testing is a campaign walkthrough. Whether it's a rich online survey, a qualitative assessment where we literally walk people through each phase of a campaign as they would experience it in reality, or a geo-targeted mobile survey that pings people as they move past various campaign elements after launch, we want to measure not only impact but also cohesion. An amazing poster can be little more than artwork if it doesn't connect back to the rest of the campaign. A groundbreaking TV ad can become a hindrance when all of the outdoor spreads a completely different message. We don't want to see that happen, and that's why we use various methodologies to understand the whole (a simple sketch of what that scoring might look like follows).
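To make the impact-plus-cohesion idea concrete, here is a minimal, hypothetical sketch in Python of how walkthrough responses might be rolled up. The channel names, rating scales and the "felt like part of the same campaign" question are illustrative assumptions, not our actual instrument.

# Hypothetical sketch: rolling walkthrough responses up into per-channel
# impact and a cross-channel cohesion score. Names and scales are toy
# assumptions for illustration only.

from itertools import combinations
from statistics import mean

# Toy responses: each respondent rates every element's impact (1-10) and says
# whether each pair of elements felt like part of the same campaign.
responses = [
    {"impact": {"tv": 8, "digital": 6, "outdoor": 7},
     "connected": {("tv", "digital"): True, ("tv", "outdoor"): True,
                   ("digital", "outdoor"): False}},
    {"impact": {"tv": 9, "digital": 5, "outdoor": 4},
     "connected": {("tv", "digital"): True, ("tv", "outdoor"): False,
                   ("digital", "outdoor"): False}},
]

channels = ["tv", "digital", "outdoor"]
pairs = list(combinations(channels, 2))

# Impact: mean rating per channel across respondents.
impact = {ch: mean(r["impact"][ch] for r in responses) for ch in channels}

# Cohesion: share of channel pairs judged to belong to the same campaign.
cohesion = mean(
    mean(1 if r["connected"][pair] else 0 for pair in pairs) for r in responses
)

print("Per-channel impact:", impact)
print(f"Cross-channel cohesion: {cohesion:.0%}")

The exact math matters less than the principle: cohesion has to be asked about and scored explicitly, or it never shows up in the data.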
• Norms – some clients live by them, some clients never want to hear about them. We try to think about something a bit different – benchmarking. If we’re able to create a rich enough set of norms that’s on-target with the exact type of content we’re looking at, we’re happy to use them. But things have changed. Even a TV ad is no longer just a TV ad – it’s an experience in and of itself. In a lot of ways it’s nothing like the TV ads from five or 10 years ago, so using data from five or 10 years ago to inform current norms may not be the best approach.
With benchmarking, we look for the best comparative tool that’s going to give us the strongest sense of how well our client’s ads are driving consideration and ultimately action. Sometimes this means norms. Sometimes it means a social media analysis to look at volume, sentiment and engagement (there’s a big difference between a post that’s been retweeted a million times and one that has created a million sparks of unique, thoughtful discussion). Sometimes it means comparison alongside ads from competitors to see whether the message is overshadowing the competition or getting lost in its wake. By thinking about each campaign individually and assessing each client’s particular goal, we use unique benchmarks to get a truer sense of how the campaign will perform in reality.
Furthermore, while norms make us feel "safe" about what we're putting out there, they often keep us from taking risks to create truly innovative campaigns that break through. And because "innovation" is sometimes hard to measure in a quantitative survey – consumers don't always know how to rate something they haven't seen before – a unique, mixed approach to campaign research becomes all the more important. (The sketch below shows one small piece of that mix, on the social side.)
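As a toy illustration of the retweet-versus-discussion distinction, here is a hypothetical sketch of how a social benchmark might separate raw amplification from unique engagement. The data shape and field names are assumptions; a real pass would pull from a social listening tool and layer in sentiment scoring.

# Hypothetical sketch: separating amplification from unique discussion
# around a single ad. Record structure is an illustrative assumption.

# Toy interaction records: "retweet" = amplification, "reply" = someone
# wrote something of their own.
interactions = [
    {"type": "retweet", "author": "@a"},
    {"type": "retweet", "author": "@a"},
    {"type": "retweet", "author": "@b"},
    {"type": "reply", "author": "@c", "text": "This spot nails the tone."},
    {"type": "reply", "author": "@d", "text": "Saw the billboard too, same idea?"},
]

volume = len(interactions)
retweets = sum(1 for i in interactions if i["type"] == "retweet")

# "Sparks of unique, thoughtful discussion": distinct people who added their
# own words rather than just passing the ad along.
unique_discussants = len({i["author"] for i in interactions if i["type"] == "reply"})

print(f"Total interactions: {volume}")
print(f"Retweets (amplification): {retweets}")
print(f"Unique discussants (engagement): {unique_discussants}")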
• Mixing monadic and sequential. Whoever came up with monadic testing is a genius. In most cases, people see a single piece of advertising for a single product at one time, so testing a single piece of advertising on its own makes a lot of sense. Even so, we can hit a roadblock.
Let’s say we show an ad to 400 people on a given weekend and get crystal-clear feedback that couldn’t be a better road map for moving forward. The next weekend, we test our new ad with 400 people. But they are different people. And now it’s snowing and everyone is angry. And in the middle of the week, some competitor released an ad that looks an awful lot like what we’ve come up with. The comparison is far from apples-to-apples.
Our way to combat this is to pair monadic studies with sequential studies. By testing single ads in isolation in one survey and a variety of ads against each other in another, we minimize the pitfalls of each approach and come up with an answer that has a bit more clout (and a lot more reality) behind it. The sketch below shows the kind of side-by-side read this pairing makes possible.
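Here is a minimal, hypothetical sketch of that side-by-side read: monadic means from one cell, head-to-head preference from another, and a quick check on whether the two agree. The ad names, scales and sample sizes are toy assumptions, not a prescribed analysis plan.

# Hypothetical sketch: pairing a monadic read with a sequential read on the
# same two ads. All values below are illustrative.

from statistics import mean

# Monadic cell: each respondent saw exactly one ad and rated it 1-10.
monadic = {
    "ad_a": [8, 7, 9, 6, 8],
    "ad_b": [7, 7, 6, 8, 7],
}

# Sequential cell: each respondent saw both ads (rotated order) and picked one.
sequential_picks = ["ad_a", "ad_a", "ad_b", "ad_a", "ad_b"]

monadic_means = {ad: mean(scores) for ad, scores in monadic.items()}
preference_share = {
    ad: sequential_picks.count(ad) / len(sequential_picks) for ad in monadic
}

monadic_winner = max(monadic_means, key=monadic_means.get)
sequential_winner = max(preference_share, key=preference_share.get)

print("Monadic means:", monadic_means)
print("Head-to-head preference:", preference_share)
print("Same winner in both reads:", monadic_winner == sequential_winner)

Where the two reads disagree, that is often a flag that context – competitive clutter, order, fatigue – is shifting the answer, which is exactly the kind of signal a single methodology would miss.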
All of this is to say that there is no single answer when it comes to campaign testing. With clients rolling out marketing efforts that include TV ads, YouTube mini-movies, digital banners, mobile games, static posters, lenticular posters, animated posters and much, much more, we can't rely on a single methodology or technology to give us all of the answers. Campaign testing works best when there's not a set template. Mixing quant, qual, social media and desk research is key to our approach. And our next project will likely require us to mix them in a completely different way than we have in the past.
We hope this gets others talking about how they tackle campaigns because, let's face it, the research community is a small one. If we didn't have each other to bounce ideas off of, we'd be spending our days staring at the wall.