Editor’s note: Kevin Gray is president of Cannon Gray, a marketing science and analytics consultancy.

"Survey research is old. Anybody can do it with the software we have these days."

This is a misperception that seems quite widespread, to the point where it has almost become a marketing research urban legend. As a marketing science person, I am a heavy user of survey research data and am concerned that fundamental survey research skills are eroding. PowerPoint does not make one a competent presenter, nor can Word transform one into a professional writer. Likewise, user-friendly questionnaire design software is not a substitute for genuine skill and experience.

Survey research is not easy. Ask the pollsters! In the interest of disclosure, I want to make it clear that none of my company's revenue is derived from data collection, though I do frequently provide input for sample and questionnaire design. The ax I will grind is that a considerable amount of time and budget is wasted because of poor questionnaire design. We often spend more time and more money than we have to in order to collect less valuable data. That's a lose-lose-lose proposition.

Survey quality

Research objectives have the biggest impact on survey quality. Unfortunately, they can be blurry and, like many questionnaires, essentially the product of an ad hoc committee. One result is that nice-to-know questions may outnumber need-to-know questions.

Excessive questionnaire length has long been an issue in marketing research and, with mobile surveys on the rise, will become even more so. I do not wish to launch a global campaign against questionnaire obesity, but in MR it's a serious problem. Inspired by the Body Mass Index, I would like to propose a Questionnaire Mass Index (QMI):

QMI = (Time wasted on nice-to-know questions)² / (Time required for need-to-know questions) × 100

So, if your average interview length is 20 minutes and respondents, on average, spend four minutes answering questions that actually have little business meaning, the remaining 16 minutes go to need-to-know questions and your QMI score would be 4² / 16 × 100 = 100. The lower the score, the better. Though I am being tongue-in-cheek here, a simple guideline such as this can help us discipline ourselves and improve the health of our questionnaires.
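For the arithmetically inclined, here is a minimal sketch of the calculation in Python; the function and its inputs are my own illustration, not part of any survey tool.

def qmi(total_minutes: float, wasted_minutes: float) -> float:
    """Questionnaire Mass Index: (wasted time)^2 / (need-to-know time) x 100."""
    need_to_know = total_minutes - wasted_minutes  # minutes spent on questions that matter
    return wasted_minutes ** 2 / need_to_know * 100

print(qmi(20, 4))  # the example above: 4**2 / 16 * 100 = 100.0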

There are countless ways to reduce the flab in our surveys. Even on a mobile device we still have much more leeway than in the "old days" when surveys were often conducted by telephone and administered by a human interviewer. A 10-to-15-minute mobile survey will generally be able to cover more ground than a 10-to-15-minute telephone survey.

I've noticed that very similar questions are sometimes asked multiple times in the same questionnaire. While this is occasionally deliberate – for instance, when very important questions are asked in slightly different ways – more often than not it's an oversight. This wastes time and can confuse or irritate respondents. Moreover, members of online panels have been profiled to some degree and key demographics and some psychographics have already been collected. There may be no need to ask these questions again. Another way to save time is to split questionnaires into chunks so that only the most critical questions are asked of all respondents. Clever questionnaire design can reduce questionnaire length, lower costs and improve response quality.

Ask yourself whether ordinary consumers will interpret the questions in your survey the way you do. They are not brand managers or marketing researchers. Also ask yourself if you would be able to answer your own questions accurately! Highly detailed recall questions have always been discouraged by survey research professionals, and the folks who established consumer diary panels decades ago were well aware that even diary data are not 100 percent accurate. Answers to questions about purchase, for example, should be interpreted directionally and should not be used as substitutes for actual sales figures when the latter are available.

Attitudes and opinions

Surveys are particularly useful for uncovering attitudes and opinions, which leave no trail at the cash register. Knowing what consumers buy is important but knowing why they buy it is also important. Deriving the why from the what is much harder than is sometimes assumed, and this is where survey research often fails badly, usually because of poor questionnaire design. Merely copying and pasting attitudinal statements from old questionnaires or from a lengthy, brainstormed list of statements is asking for trouble.

When developing your own scales, think first of the factors – the underlying dimensions – then the items that you will use to represent these factors. For an in-depth look at how to measure attitudes and opinions, see Psychometrics: An Introduction (Furr and Bacharach), an up-to-date and readable book on psychometrics. Another good resource is Marketing Scales, an online repository of more than 3,500 attitude and opinion scales.

You don't need to wed yourself to five- or seven-point agree-disagree scales, which are prone to straightlining. Maximum difference scaling, simple sorting tasks and various other alternatives often work better. However, if the statements themselves do not make sense to respondents, or mean different things to them than they do to you, you'll have a problem regardless of the type of question you've settled on!

If you conduct an international project, local culture should be first and foremost on your mind. What seems straightforward to you may be unfathomable or even offensive to those from other cultures, even when they are quite fluent in your native tongue. Don't assume that all statements or items can be translated directly into other languages, either. Sometimes only rough translations are possible because the corresponding vocabulary does not exist in the local language. What may seem like a mundane concept to you may not survive the voyage to another society.

Data quality

When certain types of questions are asked again and again – awareness and usage questions, for example – there is no need to keep reinventing the wheel. In fact, this is bad practice that can run up costs and lower data quality. Consider building banks of standard questions and questionnaire templates for different kinds of surveys. This is one way questionnaire design software can come in handy and raise productivity. QUAID is an artificial intelligence tool that can help you improve the wording of your questions.
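As a minimal sketch of the question-bank idea (the structure and wording below are my own illustration, not a feature of any particular software):

# A hypothetical bank of standard question blocks, keyed by topic,
# that questionnaire templates can reuse instead of rewriting from scratch.
QUESTION_BANK = {
    "awareness": "Which of the following brands have you heard of?",
    "usage": "Which of these brands have you used in the past month?",
    "overall_liking": "Overall, how much do you like this product?",
}

# A template simply lists the blocks it needs, in order.
TRACKING_TEMPLATE = ["awareness", "usage", "overall_liking"]

def build_questionnaire(template):
    """Assemble a questionnaire from standard blocks in the bank."""
    return [QUESTION_BANK[key] for key in template]

print(build_questionnaire(TRACKING_TEMPLATE))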

Sample design is another place where survey research can go awry. In my opinion – and I suspect Byron Sharp and his colleagues at the Ehrenberg-Bass Institute would agree – we often survey a slice of consumers that is far too narrow. Not infrequently, a client wishes to interview women aged 18-24, for instance, when the potential consumer base for their product is vastly more diverse. Often, these sorts of screening criteria are driven by gut feel or emerge from a few focus groups, with no true empirical foundation. Casting a net that is too narrow runs up research costs, increases field time and can give us a distorted picture of reality. This is another lose-lose-lose proposition.

Though advanced analytics can be conducted after the fact, they usually work best when designed into the research. "Begin at the end and work backwards" is sound advice and is especially pertinent when the data will be analyzed beyond the cross-tab level. For example, if you intend to run key driver analysis – the simplest example of which would be correlating product ratings with overall liking – make sure to ask all respondents the questions that will be used in the analysis. Data imputation of many kinds is now practical but it is still preferable to have all respondents answer the most important questions. Involving a marketing scientist in questionnaire design for projects requiring advanced analytics is recommended. Ideally, this will be the person who will conduct the modeling when the data arrive.
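As a minimal sketch of that simplest form of key driver analysis, assuming pandas is available; the ratings data and column names are entirely hypothetical.

import pandas as pd

# Hypothetical respondent-level ratings on 1-10 scales; names are illustrative.
df = pd.DataFrame({
    "overall_liking": [8, 6, 9, 4, 7],
    "taste":          [9, 5, 9, 3, 7],
    "packaging":      [6, 7, 8, 5, 6],
    "price_value":    [7, 4, 8, 4, 6],
})

# Correlate each attribute with overall liking and rank the "drivers."
drivers = df.corr()["overall_liking"].drop("overall_liking").sort_values(ascending=False)
print(drivers)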

There are also formatting and layout issues, as well as optimization for device type (e.g., PC vs. mobile), that I haven't gotten into; these are covered in some of the books I cite below.

Adding more value

Consumer surveys are not easy but, now that we have many other data sources (e.g., transaction records, social media), they can actually add more value than ever through synergy with those other data. Access to a variety of data can help you both design your survey and interpret its results. Though often the butt of criticism (including mine), questionnaire design tools do have many benefits. A former employer of mine began developing such a tool in the late 1980s, so I know firsthand that they can cut down considerably on the clerical aspects of questionnaire design, giving us more time to think about things that really matter, and also reduce errors. However, in the hands of inexperienced or poorly-trained researchers these tools may do more harm than good. As with miracle diets, there are risks and your questionnaires may actually gain weight.

Further reading

This has been a very small article about a very big topic. For those who wish to learn more – and there is so much to learn – there are online sources, seminars and university courses. There are many books as well. Sharon Lohr has written an excellent and popular textbook on sampling, Sampling: Design and Analysis. Many fine books have also been published on survey research and questionnaire design, and three I've found particularly helpful are: Internet, Phone, Mail and Mixed-Mode Surveys (Dillman et al.), The Science of Web Surveys (Tourangeau et al.) and Web Survey Methodology (Callegaro et al.). The Psychology of Survey Response (Tourangeau et al.) and Asking Questions (Bradburn et al.) have stood the test of time and I highly recommend them. Public Opinion Quarterly, an AAPOR publication, is an excellent source for the latest research on survey methods.
