Effectiveness by association?

In the minds of many advertisers, if you want a television ad to be at its most effective, you place it on a show watched by a lot of people in your target market. But what if certain shows, from ratings winners to also-rans, do a better job of engaging their viewers? And what if that engagement carries over to the commercials, enhancing their ability to communicate and persuade? Is your commercial better off being seen by fewer, but more involved, viewers?

These were just some of the questions behind "Engaging the Viewer," a study conducted last summer by Starcom Media Services, the media division of Leo Burnett advertising in Chicago. "A big issue at the moment in the media industry, among people buying and selling television time, is about the value of program environment," says Kate Lynch, research director of Starcom. "Does the right environment make people pay more attention to the commercials? Different kinds of programs attract different levels of attention. Does that have an effect on attention to the commercial breaks and then on people’s recall or awareness or what they take away from the commercials?"

Using NASA-developed technology administered by Capita Systems Inc., a Blue Bell, Pa.-based subsidiary of Capita Research Group Inc., Starcom employed brainwave analysis to measure respondents’ involvement in a TV show and a series of commercials, to find out if their engagement with the show carried over to the ads that followed it. "I’ve been looking for different technologies or methods to try and evaluate [involvement] for years. We’ve done day-after recall studies, we’ve done focus groups, we’ve done lots of different things. But [using brainwave analysis] interested me because it was a truly objective measure and something that we could do a pilot study on quite easily," Lynch says.

Contrary to the ER-like imagery it conjures, Capita’s brainwave measuring equipment is non-invasive and doesn’t require affixing pads to worried brows with sticky gels. Rather, the apparatus resembles a pair of headphones, says David Hunter, president and CEO of Capita Systems. "We have improved the NASA technology with a new headset on which we have a patent pending. It takes EEG measurements continuously from the surface of the head and converts them into an Engagement Index℠ [EI] five times per second through a proprietary algorithm NASA developed during 10 years of research. We define engagement as the amount of electrical activity in the cognitive portion of the brain," Hunter says.
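Capita’s algorithm itself is proprietary, but the NASA human-factors research it draws on popularized an engagement index computed from the relative power of EEG frequency bands, roughly beta / (alpha + theta). The sketch below assumes that published form, along with a hypothetical sampling rate and conventional band edges; it illustrates the general technique, not Capita’s actual method.

    from scipy.signal import welch

    FS = 256           # assumed EEG sampling rate, Hz
    WINDOW = FS // 5   # ~200 ms of samples -> five index values per second

    # Conventional band edges (Hz); real systems may define them differently.
    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_power(segment, fs, lo, hi):
        """Mean spectral power of `segment` within [lo, hi) Hz."""
        freqs, psd = welch(segment, fs=fs, nperseg=len(segment))
        mask = (freqs >= lo) & (freqs < hi)
        return psd[mask].mean()

    def engagement_index(segment, fs=FS):
        """NASA-style index: beta power / (alpha power + theta power)."""
        p = {name: band_power(segment, fs, lo, hi)
             for name, (lo, hi) in BANDS.items()}
        return p["beta"] / (p["alpha"] + p["theta"])

    def stream_ei(eeg):
        """Yield one EI value per ~200 ms chunk of a 1-D EEG trace."""
        for start in range(0, len(eeg) - WINDOW + 1, WINDOW):
            yield engagement_index(eeg[start:start + WINDOW])

A rising index on this definition means proportionally more fast (beta) activity, the pattern the NASA work associated with alert, engaged attention.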

"In terms of methodology, it has some aspects of what you would see with a dial test but we feel that this measure offers additional information to what you would get from a dial test," says Kristina Farago, marketing director, Capita Systems. "And, it’s objective. You’re not relying on a respondent to turn a dial. They just sit there and watch."

Viewer engagement

"Engaging the Viewer" was designed to find out if different programs produce different levels of attention/involvement; if different attention levels carry over into the commercial break; and if viewer engagement has an effect on communication or recall of the advertisements.

The respondents were pre-recruited to be regular viewers of one or more of the four dramas used in the study. The programs were chosen for their ability to hold their audience (based on syndicated measures) and for their differences in cost per rating point (CPP). Starcom wanted to determine if a high-CPP show was more successful at engaging the viewer than a low-CPP show, and if so, by how much. "We were finding, using our media data, that the cheaper programs were very cost-effective, but clients would say they still wanted to be in the expensive prime-time programs because people pay more attention to them. And I would say, ‘Well, how do we know that?’ " says Lynch.

"We wanted to answer client questions and give ourselves more confidence in the way we plan our television buys. There were a lot of gut feelings and myths in this marketplace about what was right and what was wrong and it seemed very few people had ever evaluated them. Today’s television world is so different than what it was 10 years ago. You have so many different choices."

The study used a sample of men and women aged 25-50. Respondents completed a demographic questionnaire and then watched one of two videos on a standard TV. Each tape contained an expensive program and an inexpensive one, each followed by a set of six commercials, and the same two sets of commercials appeared on both tapes. Tape 1 contained Program 1, a high-cost show, followed by the first set of six commercials, and Program 2, a low-cost show, followed by the second set. Tape 2 contained Program 3, a different low-cost show, followed by the first set of commercials, and Program 4, a different high-cost show, followed by the second set, the same six commercials that closed Tape 1. After viewing the tape, respondents completed a questionnaire on their interest in and attentiveness to the programs they had just watched.
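Read as a design table, the two tapes counterbalance program cost against commercial position, so each set of commercials airs after both a high-cost and a low-cost show. A sketch of that layout on our reading of the tape description (program labels are from the article; the pod names are ours):

    # (program, cost tier, commercial pod aired after it)
    TAPE_1 = [("Program 1", "high cost", "Pod A"),
              ("Program 2", "low cost",  "Pod B")]
    TAPE_2 = [("Program 3", "low cost",  "Pod A"),
              ("Program 4", "high cost", "Pod B")]
    # Across the two tapes each pod follows both cost tiers, so differences
    # in a pod's EI scores can be attributed to the preceding program.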

Lynch was initially concerned about the small samples. "Capita said 10 to 12 was fine and I said, ‘I’d be more comfortable with a few more.’ But actually when we evaluated eight or nine to see how stable the results are, they were fine at 10 and 12," she says.

"The more specific you are about the questions you’re asking, the better results you’re going to get, even more so with this kind of technology because you’re looking at small samples and you need to be focused on your recruitment criteria, any pre- and post- questions that you ask, and the actual segments of the programs and the ads that you’re showing them."

Impact lasted

The research found that commercials aired after programs with higher EI scores earned higher EI scores themselves. "It was surprising how much more effective the ads were in this particular environment. That’s not going to be the case for all commercials but for the ones we tested it was," Lynch says.

And higher cost didn’t necessarily lead to higher engagement. Program 2 (a low-cost show) and Program 4 (a high-cost show) earned the highest EI scores. "We could distinguish a difference between the programs and it wasn’t always related to the marketplace’s traditional valuation [based on the cost of ad time]. There were significant differences in attention levels and they did impact attention to the ads. The patterns we saw applied to just about every test we did. If there was a difference it carried throughout the break," Lynch says.

Respondents’ preconceived feelings about a show affected their subjective ratings of it but didn’t carry over to the show’s EI. The program with the highest CPP as well as the highest Nielsen ratings earned the highest subjective scores and EI measurements. In contrast, one of the low-cost/low-rated programs earned high EI scores from respondents who hadn’t seen the show before. The two shows earning lower EI scores kept loyal viewers engaged but didn’t hold the attention of casual viewers.

"The study has made us feel a lot more confident about some of the changes in clients’ television schedules that we’ve been recommending. We’ve also had lots of ideas for new things we can test. We feel strongly about the importance of putting the ads in the right programs," Lynch says.

More projects in mind

Lynch says she has a number of projects in mind based on client questions. If the right application comes along, she’ll use brainwave analysis again. "It was the perfect methodology for what we wanted to test. It’s an innovative technique. We won’t use it for everything but it’s another in our array of tools.

"I think we feel more comfortable overall with our understanding about how TV advertising works. There are a million things you can learn and you just need to keep digging. It’s not acceptable now for me to say ‘I don’t know.’ I’ve got to try and get some of the answers."