Changing minds

Editor's note: Walter K. Lindenmann is senior vice president and director of research, Ketchum Public Relations Worldwide, New York. This article is adapted from a presentation made in Frankfurt, Germany on May 10, 1996 at a special public relations evaluation workshop initiated by Ketchum Public Relations, GmbH, Munich.

During the past four to five years there has been a considerable amount of downsizing in business establishments, both large and small, in all four corners of the globe. This often has resulted in cuts in the amount of funds available for advertising, marketing and public relations programs and activities.

To win approval for new expenditures in any of these areas, communications professionals have found it increasingly necessary to justify their existence. Top corporate executives no longer simply approve advertising and/or public relations programs or activities because they look or sound creative, or because they are something that has always been done - rather, they first ask questions such as these:

"Will this advertising and/or public relations effort actually move the needle in the right direction? Will the new communications activities change what people know, what they think and feel, and how they are inclined to act? What impact - if any - will the advertising and public relations programs have in changing consumer and opinion-leader awareness, retention, attitude and behavior levels?"

As the research director of one of the world's largest public relations counseling firms - Ketchum Public Relations Worldwide - I am well aware of a growing interest in my field in the need to measure public relations effectiveness from a bottom-line perspective. Within our own agency, the number of PR measurement and evaluation projects that we have designed and carried out for our clients during the past four to five years has more than tripled.

Not only that, during the past two years, we have experienced a surge of interest in PR measurement that crosses international boundaries. More and more of our clients are asking us to measure the effectiveness of their PR programs and activities in several countries.

Practitioners uneasy

Despite the growing interest in evaluation, often I find that public relations practitioners are uneasy about incorporating measurement and evaluation into their activities. Many of them contend they do not know where to begin. At Ketchum, we've tried to simplify the process by developing what we call a public relations "effectiveness yardstick" - a straightforward set of guidelines or standards that the professional PR practitioner can follow to measure PR effectiveness.

It involves a two-step process: first, setting public relations objectives and then determining at what levels you wish to measure public relations effectiveness.

Step 1: Setting objectives

To begin, the public relations practitioner must ask himself or herself: What are the goals or objectives of the public relations program? What is the PR program or activity seeking to accomplish? To assess the impact of public relations, we need to determine who within our organization are the appropriate persons to speak on behalf of the organization. We need to pinpoint our messages, our target audiences and our channels of communication, and then use each of these as gauges to determine our effectiveness in achieving our goals.

Step 2: Determining levels of PR measurement

After we have set our objectives, we have to decide what we want to measure: Is it how good a PR job we did? Is it finding out if anyone heard us or paid attention to our PR efforts? Or, is it determining if anyone is apt to think or act differently because of our PR efforts?

I label these three different measures of PR effectiveness:

Level 1, the basic level for measuring PR outputs;

Level 2, the intermediate level for measuring PR outgrowths; and,

Level 3, the advanced level for measuring PR outcomes.

Like marks on a yardstick or a ruler, each level identifies a higher plateau for the measurement of public relations success or failure.

At our firm, we like to use a yardstick to graphically show the three different levels, each one higher and more advanced than the one before it. The lowest step on the ladder - or the first marker on my imaginary yardstick - is Level 1, which measures what we, or our organization, actually did. For example, if our organization happened to be a hospital, health clinic or pharmaceutical company, did we prepare an attractive-looking brochure for our patients or prospective customers? Was the press conference that we held to publicize or promote a new product or service well attended? Did the media pick up and use our press releases or announcements? Did our messages get transmitted to the specific audience groups we were trying to reach?

Level 1 measures PR outputs; it examines how well PR people present themselves, how they handle given activities or events. At this level, the PR practitioner measures the amount of exposure his or her organization received in the media, the total number of placements, the total number of audience impressions, and/or the likelihood of having reached specific target audience groups.

Easy to carry out

This type of measurement is relatively easy to carry out; that's why I call it a basic measure. To measure outputs, PR practitioners often use content analysis techniques to track or measure publicity placements or conduct simple public opinion polls to find out if targeted groups have been exposed to certain messages.

An example: A well-known technology company, headquartered in New York City with offices throughout the U.S. and Europe, markets its hardware products and selected software services on both sides of the Atlantic. It held two press conferences to announce six major new products.

The press conferences were held on the same day - one in New York City, the other in Paris - and generated considerable media coverage in the U.S. and Europe. Within a matter of a few days, 491 print and broadcast news and feature stories appeared in the press - 373 in U.S. media and another 118 in the European media.

We were retained to determine how favorable toward the company and its new products the press coverage ended up being. More specifically, our client sought answers to these questions:

  • How did the media in the U.S. and Europe handle this major announcement of the company pertaining to its new products and services?
  • Was press treatment favorable, neutral or unfavorable toward the company and toward key themes and messages important to the company?
  • Which spokespersons were quoted most frequently, in what context and to what extent?
  • How did the media treat the company in comparison to its principal competitors?
  • How did the media's handling of the announcement correlate to consumer inquiries and/or purchase behavior patterns pertaining to the company's products and services?

Content analysis of the 491 news and feature stories was the methodology used to measure PR outputs in this case. We examined the press coverage by first coding and categorizing each story on the basis of 37 different analytical variables, classifying the stories by such categories as type of media in which they appeared, by company, competitor and topic mentions, by position or stance taken by the media, by persons and/or organizations quoted by the media, and so on.

Then we entered this information in our computer, processed and analyzed the data, and prepared a detailed report for our client giving the findings and their implications. The entire effort was carried out relatively quickly and inexpensively. It took six weeks from start to finish.
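The coding-and-tallying step described above can be sketched in a few lines of code. This is a hypothetical illustration only - the categories, stances and story counts below are invented for the example and are not the 37-variable scheme or the actual data from the study.

```python
from collections import Counter

# Invented toy data: each coded story reduced to (region, stance).
# A real coding sheet would carry many more analytical variables.
stories = [
    ("U.S.", "favorable"), ("U.S.", "neutral"),
    ("Europe", "favorable"), ("Europe", "unfavorable"),
]

# Tally the coded stories by stance and by region.
by_stance = Counter(stance for _, stance in stories)
by_region = Counter(region for region, _ in stories)

total = len(stories)
favorable_share = by_stance["favorable"] / total
print(f"{total} stories; {favorable_share:.0%} favorable")
print(dict(by_region))
```

In practice, each additional coding variable (topic mentioned, spokesperson quoted, competitor named) becomes another field to tally and cross-tabulate in the same way.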

Output measures can be summarized this way:

Level 3: Advanced (Outcomes)

Level 2: Intermediate (Outgrowths)

Level 1: Basic (Outputs)

         Measuring . . .

                   Targeted Audiences

                   Impressions

                   Media Placements

Keep in mind that measuring outputs is only the most basic level of PR measurement. The only thing you're doing at this level is measuring whether or not your messages, or your organization's messages, were actually disseminated and picked up by the media.

Don't stop at Level 1

Whatever you do, don't stop at Level 1. Move up the yardstick or the ladder to higher levels. Level 2 is somewhat more sophisticated. At this level, PR practitioners measure whether target audience groups actually received the messages directed at them, whether they paid attention to those messages, whether they understood and retained them.

Level 2 contains PR outgrowth measures. To measure outgrowths, PR practitioners usually rely on a mix of qualitative and quantitative data collection techniques: focus groups, depth interviews with opinion leaders, and extensive polling of key target audience groups by telephone, face-to-face or - at least in the U.S., recognizing that practices differ from country to country - by mail.

An example: One of our clients - a well-known beverage company - sells its products around the world. During the past two years it has been actively distributing background and promotional materials about its products and services to reporters and editors in the general and trade press in four countries in Europe - France, Germany, Great Britain and Spain; in four Latin American countries - Argentina, Brazil, Costa Rica, Mexico - and Puerto Rico; and in six Asian and Pacific Rim countries - China, Hong Kong, Japan, Korea, the Philippines and Taiwan.

The company wished to determine how familiar its products and brands were among key media it was targeting in those 15 countries, in comparison to its major competitors. It also was interested in measuring - both qualitatively and quantitatively - how much attention those in the media were paying to the company's publicity efforts, whether reporters and editors were aware of the range of the client's products and services, and whether those in the media were retaining key messages that the company was disseminating through its publicity materials.

For this client, we suggested a series of one-on-one depth interviews with key representatives of the media in each of the countries being targeted. We are convinced that, by focusing on issues of importance to the client and by interviewing selected reporters and editors, we obtained first-hand information from those in the media regarding how much they know, how much they understand and how much information they have retained about our client and its products and services.

Here's a second example of a research project that we have fielded that sought to measure outgrowths - that is, how much people know, understand and retain. A technology client in the U.S. several years ago developed a new diagnostic imaging machine that brought the latest technology to hospitals and clinics that do not have specialists in residence. The technology allowed patients to go to their local hospital for an on-the-spot live ultrasound exam that was transmitted by telephone lines to a specialist in another city.

The company held a first-time-ever series of remote, live demonstrations of an actual exam taking place in Memphis, Tenn. The exam was shown live at two major trade shows: the annual conference of the American Heart Association in Dallas, and the annual conference of the Radiological Society of North America in Chicago.

The company wanted immediate feedback on the new machine from the cardiologists and radiologists attending the two trade shows. It wanted to assess physician familiarity, comprehension and retention levels of the benefits offered by its new machine. To obtain the data needed, we suggested a series of on-site, face-to-face, intercept interviews with those physicians who attended the two conferences. The company arranged for special demonstrations of its new machine during the first two days of each convention. During the third day we sent a team of interviewers onto the convention floor to conduct intercept interviews.

A total of 100 cardiovascular physicians and surgeons were interviewed at the Dallas trade show and another 239 radiologists and hospital administrators were interviewed in Chicago.

In both cases, interviews were completed in one day, data were tabulated and analyzed overnight, and the company not only obtained useful background information concerning physician familiarity, comprehension and retention levels pertaining to its new machine, but it also was able to prepare and distribute special press releases summarizing the research findings and their broader implications, displaying how technology could be used to improve patient diagnostic services.

Outgrowth measures can be summarized this way:

Level 3: Advanced (Outcomes)

Level 2: Intermediate (Outgrowths)

          Measuring . . .

               Retention

               Comprehension

               Awareness

               Receptivity

Level 1: Basic (Outputs)

          Measuring . . .

               Targeted Audiences

               Impressions

               Media Placements

Most advanced

Level 3 is the most advanced PR measurement level of all. When one reaches this higher end of the effectiveness yardstick, what is being measured is outcomes - such things as opinion, attitude, and behavior change.

To measure outcomes, the PR practitioner needs to rely on such techniques as before-and-after polls (pre- and post-tests); on the development and use of experimental and quasi-experimental research designs; on the use of unobtrusive data collection methods such as observation, participation and role-playing; on the use of advanced data analysis techniques (such as perceptual mapping, psychographic analysis, factor and cluster analysis, and conjoint analysis); or on the conducting of comprehensive, multi-faceted communications audits.

Let me give you an example of the type of research project that begins to measure not only message receptivity, awareness levels, comprehension and retention but also starts to get at opinion, attitude and behavior change.

One of our U.S. clients is the Dole Food Company. Several years ago, in collaboration with the Society for Nutrition Education, Dole developed a special CD-ROM program designed to educate children between the ages of 9 and 10 and their teachers about the importance of proper nutrition and the role that eating five servings of fruits and vegetables a day can play in achieving proper nutrition. Students at the third-grade level in 178 classes in 65 different schools in five different states were selected by Dole to participate in a pilot test of the CD-ROM educational program.

Dole wanted to measure the knowledge, attitude and behavior levels of a selected group of third-grade teachers and their students regarding fruits and vegetables and proper nutrition, both before and after these audiences were exposed to the CD-ROM program prior to a national rollout.

Our research design consisted of distributing self-administered questionnaires to approximately 1,000 students and 40 teachers to measure awareness, attitude and behavior levels before the CD-ROM introduction, followed by a virtually identical self-administered questionnaire distributed to the same approximately 1,000 students and 40 teachers four months after the program introduction.

Completed, matched pre- and post-questionnaires were filled out and returned by students in 44 of the 178 classes. In those 44 classes, a total of 1,038 students and 37 teachers participated in the before-and-after study.

The data found the 5 A Day Adventures CD-ROM program to be extremely successful. The proportion of students giving correct answers increased from the before phase to the after phase on 17 of the 18 questions asked. The proportion of students expressing an interest in talking to other family members about the importance of eating five servings of fruits and vegetables a day rose from 45.1 percent before exposure to the program to 67.0 percent after exposure.
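The before-and-after comparison described above boils down to computing the proportion of matched respondents giving a particular answer in each wave and taking the difference. A minimal sketch, using invented toy data rather than the actual study responses:

```python
# Invented toy data: 1 = respondent expressed interest, 0 = did not.
# Each position is the same (matched) respondent in both waves.
pre_responses  = [0, 1, 0, 0, 1, 0, 1, 0, 0, 1]
post_responses = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]

def proportion(responses):
    """Share of respondents answering 'yes' (coded 1)."""
    return sum(responses) / len(responses)

before = proportion(pre_responses)
after = proportion(post_responses)
change = after - before
print(f"before {before:.1%}, after {after:.1%}, change {change:+.1%}")
```

The same calculation, repeated question by question, yields the kind of "17 of 18 questions improved" summary reported above.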

More than eight out of every 10 teachers felt their students had found the program easy to use, and a similar proportion were convinced that 5 A Day Adventures had encouraged their students to eat more fruits and vegetables. Eight in 10 of the teachers also felt the program had encouraged them, themselves, to eat more fruits and vegetables.

Based on the research findings, Dole modified the CD-ROM program and then launched a national rollout. One year after the completion of this pilot research project, 13,000 schools throughout the U.S. were participating in the program and 50,000 CD-ROM disks had been distributed to schools and teachers across the country.

Fascinating problem

Here is a second example of a study that we designed and carried out to measure change at the outcome level. Several years ago, another food company came to us with a fascinating problem. They wanted to determine which is more effective in promoting a new product: public relations alone, advertising alone, or public relations and advertising together.

For this client we used a variation of what is known as the "classic research design" methodology. First, we identified four comparable communities in four different sections of the country. Then, we conducted telephone interviews in all four communities - 250 interviews per community - to determine familiarity, attitude and behavior levels relating to the company, its products and services.

The company's new product was then introduced in the first community using only public relations techniques. The new product was introduced at the exact same time in the second community using only advertising techniques. It was introduced at the same time in the third community using a mix of public relations and advertising techniques. In the fourth community it received no public relations or advertising support.

After the introductions, identical follow-up telephone interviews were conducted in all four locations. Once again, 250 consumers were interviewed in each community. We probed to determine familiarity, attitude and behavior levels relating to the company, its products and its services. We then compared data from the pre- and post-interviews in all four communities to determine which of the different communications approaches was most effective.
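The analysis step of this four-community design reduces to computing each community's pre-to-post gain and ranking the gains. The sketch below uses invented familiarity scores (shaped to match the ranking reported in the article), not the study's actual data:

```python
# Invented toy data: share of consumers familiar with the product,
# before and after the introduction, in each test community.
communities = {
    "PR only":          {"pre": 0.22, "post": 0.41},
    "Advertising only": {"pre": 0.21, "post": 0.30},
    "PR + advertising": {"pre": 0.23, "post": 0.38},
    "No support":       {"pre": 0.22, "post": 0.24},
}

# Effectiveness = pre-to-post gain in each community, ranked highest first.
gains = {name: round(c["post"] - c["pre"], 2) for name, c in communities.items()}
ranked = sorted(gains.items(), key=lambda kv: kv[1], reverse=True)
for name, gain in ranked:
    print(f"{name}: +{gain:.2f}")
```

The "No support" community serves as the control cell: subtracting its gain from the others would isolate the lift attributable to the communications programs themselves.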

When I tell people about this before-and-after quasi-experimental design they always ask me who won. Which communications approach turned out to be most effective? Keep in mind, results can change depending on the product, depending on the community, depending on the types of PR and advertising techniques utilized.

In this particular instance, however, communications turned out to be most effective in the community in which the new product was introduced using public relations techniques only. It was next most effective in the community in which the new product was introduced using a mix of PR and advertising techniques. In third place was the community in which advertising only was used.

And, thank goodness, especially for those of us who are professional communicators, in last place was the community in which the new product was introduced without any public relations or advertising support at all.

Outcome measures can be summarized this way:

Level 3: Advanced (Outcomes)

          Measuring . . .

                Behavior Change

                Attitude Change

                Opinion Change

Level 2: Intermediate (Outgrowths)

          Measuring . . .

          Retention

          Comprehension

          Awareness

          Receptivity

Level 1: Basic (Outputs)

          Measuring . . .

               Targeted Audiences

               Impressions

               Media Placements

No single method

These are just some of the ways to measure success in public relations. To put things into perspective, here are two final pieces of advice for those of you who plan to plunge more deeply into measuring public relations effectiveness.

First, recognize that there is no single, simple method for measuring PR effectiveness. Depending upon which level of measurement is required, an array of different tools and techniques is needed to properly assess PR impact.

Finally, it is extremely important, before you attempt to evaluate anything you do, that you first set specific goals and objectives against which the activities of your programs can eventually be measured. That is called formative evaluation. The time to think about evaluation is before a public relations program has been launched, not after it is under way.