See through their eyes

Editor's note: Ron Sellers is president of Ellison Research, Phoenix.

One of the most valuable exercises for professionals in the marketing research world isn't writing questionnaires - it's answering them.

Many studies, of course, screen researchers out with an up-front security question. Others don't. When given the opportunity, try playing the respondent. Doing this a few times will lead you to a conclusion that is difficult to escape: researchers often don't ask very good questions.

Think about it: we often ask people to remember things they did six months or a year ago. We expect them to give us exact numbers on the spur of the moment on questions such as how much television they watch each month or how much they paid for their last set of tires. We put them in unrealistic hypothetical situations and ask them how they are likely to react.

And why do we do this? Because we want information. Information is obviously the lifeblood of the research industry. But do we ever stop to think about whether the information we gather is actually valid?

Researchers spend time strategizing about what statistics to glean from the data, what scales to use on the questions, and how to order the questions so as to avoid bias. Do we ever ask ourselves whether the questions can be fairly answered at all?

I travel a tremendous amount. I recently received a questionnaire asking me how many times I've rented a vehicle in the last 12 months. Not only that, but for each time I rented a car, the research company wanted to know what size I rented, what company I rented from, what city I rented in, and how much I paid.

Now, that may be fine for someone who rents a car once or twice a year. But for someone who has rented about 30 cars this year, just how likely is it that I would remember all of that information - or take the time to fill it all out if I did remember?

This is not a one-time problem. I've been asked the same types of questions about hotels (do they really expect me to recall whether last February I stayed in the Hyatt or the Marriott in Atlanta?). I've been asked to recall to the penny and to the tenth of a gallon just how much fuel I put in my car in the past week. I've been asked to recall how many days I spent on a cruise five years ago.

I'm a researcher. I live and breathe this stuff. And I couldn't answer any of those questions. Why do we think respondents, who are donating a few minutes of their time, will be able to do any better?

One of the most effective ways of guarding against this pitfall is very simple: once you've designed a questionnaire (or had one designed for you), answer it yourself. Go through each and every question and answer it as if you were the respondent. It's a simple task, but one that too few researchers undertake. If the questionnaire confuses you, it will confuse respondents. If it bores you, it will bore respondents. If you can't remember the kind of information it asks for, neither can respondents.

Better yet, administer the questionnaire to a few non-research people - your boss, your spouse, your mother, it doesn't matter, as long as they're people who have the ability to stare at you and say, "Now how am I supposed to remember how much I spent on closing costs when I bought my house eight years ago?" When you get a response like that, pay attention to it.

Bad questions come in all forms, but there are a number of things we need to remember. These might seem incredibly basic, but from many of the questionnaires I receive, it's painfully obvious that researchers aren't paying attention to them.

People don't remember things in the level of detail you wish they would.

Just because you want to know how much money people spent on fast food last month doesn't mean they can accurately tell you that information. Although your world may be revolving around that fast-food project for the next month, to respondents it's a minor detail in their busy lives.

People don't remember things for as long as you wish they would.

Ask people how much they've spent on gasoline in the last week and they might be able to tell you. Ask them how much they've spent in the last month and they may be willing to go through a few mental exercises to give you a reasonable answer ("Let's see, I typically spend 20 bucks on gas each week...four weeks in a month..."). Ask them how much they spend on gas each year, and you're liable to get a wild guess just so they can move on to the next question.

Similarly, we have to be reasonable with our time frames. One client wanted us to contact people and ask them about a donation they had made to a non-profit organization six months ago. Six months after the fact, we can't expect people to remember why they gave (or even to remember that they did, in fact, give).

People will give you answers you're not expecting.

This is particularly a problem with self-administered questionnaires. It's frustrating to be asked to circle all the reasons I no longer shop at a particular store, only to find that my actual reasons aren't listed anywhere on the questionnaire, with no place to write in answers. Hint: if you're getting mail questionnaires back with all sorts of scribbled notes from respondents, you're probably not giving them sufficient response options.

One telephone interviewer asked me to name all the brands I could of a particular consumer product which happened to relate to one of my hobbies. After giving her about 15 brands (including a number of lesser-known brands which I had to spell out for her because they weren't on the list), I was already tired of the questionnaire after the first question. How much better it would have been for the researchers to ask for the first few brands that came to mind.

Remember: every possible answer needs to have a place to go. That's the purpose of options such as "other" and "unsure." One of the most common and annoying mistakes is the lack of a "don't know/refused" response on telephone questionnaires. Do we want to find out what people really think, or force them to think along the lines of the responses we wrote into the questionnaire?

Just because we want 40 minutes' worth of information doesn't mean people want to give us 40 minutes' worth of time.

I've thrown away a number of questionnaires just because I wasn't about to spend half an hour answering the questions. One mail survey I got last year asked me to rate 40 different hotels on a series of 12 different image attributes. Sorry, but I'm not going to answer 480 questions, particularly since they took up just two pages of this eight-page questionnaire. I would love to know what their response rate was on that study!

Questionnaire length may be the area where researchers battle with clients more than just about any other. The client attitude is too often "I want the information, so ask it." We need to do more to communicate with clients (internal or external) that overly lengthy questionnaires end up promoting respondent fatigue, lower response rates (both on this study and on all others), and response bias.

The overall questionnaire isn't the only thing that gets too long. Individual questions, particularly those with a lot of attributes, also promote respondent fatigue. You may want to have your company rated on 30 different attributes, but that doesn't mean they can all be shoved into one question.

People can't answer some questions because they really don't know.

Not all questions can be asked directly. In at least a third of the advertising-related focus groups I've moderated, the client has insisted we ask "Would seeing this advertising make you more likely to buy this product?" There are two basic facts about advertising. One is that advertising works - good advertising can and does increase sales. The other is that consumers either don't recognize this fact, or deny it outright. The typical response to a question like that is "I don't buy products based on whether I like their ads!"

In doing research for product names, it's the same thing. People won't go buy the new Buick because it's called the Rendezvous. The name may contribute to the image of the vehicle, form the basis for some clever promotion, or help position the vehicle in consumers' minds. All of those factors can lead to increased sales. But don't ask people whether naming it the Rendezvous will make them more likely to purchase the vehicle, because that's a question they just can't honestly answer.

Similarly, advertising recall studies often suffer from the same problem. Clients sometimes demand that respondents receive a battery of questions about where they saw certain advertisements. When in doubt, people tend to answer "television" because they assume they must have seen it on television. When people are bombarded by hundreds of advertising messages a day, as some estimates suggest, is it really realistic to expect them to remember exactly where and when they saw a particular advertisement?

Pricing studies also fall victim to this. It's easy to describe a new product to someone over the phone, then ask "How much would you pay for this?" or "If this were available for $19.95, would you buy it?" Respondents may give you an answer, but is it an answer based in reality? Companies often don't want to go through expensive methodologies such as conjoint studies, discrete choice analysis, or other techniques which can more accurately address the issue of price, so they just toss in a simple question or two and rely on the resulting data. Unfortunately, the resulting data often has no relation at all to what will actually happen in the marketplace.

What is reasonable?

It is up to researchers to understand what is reasonable to ask in a study, and what isn't. We cannot fall victim to the mindset that the client (or the boss) gets whatever they want, regardless of whether it is realistic. It's the researcher's responsibility to communicate the pitfalls of that approach in language that will make sense to the client, without relying on industry jargon. (Hopefully, this will avoid the response of one client who, when we suggested that he raise his sample size from his planned-for 100 people, objected that "You're too worried about doing this scientifically. I don't need it done scientifically, I just need it done right!")
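The sample-size objection above has a concrete basis that can be put in the client's terms. As a rough sketch (my illustration, not from the article), the standard margin of error for a sample proportion shows why 100 respondents is so limiting:

```python
# Sketch: margin of error for a sample proportion at 95% confidence,
# using the standard normal approximation. Illustrative only - the
# article does not prescribe this calculation.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a proportion from a simple
    random sample of size n. p=0.5 is the worst case; z=1.96
    corresponds to 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# With only 100 respondents, a 50% result carries roughly a
# +/- 9.8-point margin of error; quadrupling the sample to 400
# roughly halves that to about +/- 4.9 points.
print(round(margin_of_error(100) * 100, 1))  # ~9.8
print(round(margin_of_error(400) * 100, 1))  # ~4.9
```

Framed that way - "any number you see could be off by ten points in either direction" - the case for a larger sample is "doing it right," no jargon required.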

The next time you finish a questionnaire, take one more step: answer it yourself. If you struggle with it, then it should be obvious that it's not really finished.