Ask the right questions

Editor’s note: Kyle Langley is managing partner, research and analytics, at Multicultural Insights, Inc., a Coral Gables, Fla., research firm.

“Ask a stupid question, get a stupid answer.” “Garbage in, garbage out.” Many old adages describe what happens to data and information when the inputs are less than optimal. The same goes for research queries, and even today much is often left to be desired when it comes to the questions and questionnaires used in data gathering.

The desired outcomes in research depend on insight and analysis, but they always begin with questions. Not just questions, but the right questions. Sometimes data is unattainable at the end of a research project simply because key inputs were cut for time or budget reasons, or were left out through poor analytic planning. Not only can bad questions leave a client or company without valuable and much-needed data, poorly-worded questions can also bias outcomes. Bias is much worse, especially when it is not readily apparent.

Michael Singletary of the University of Tennessee writes in Mass Communications Research that, “Questionnaires must be written to accomplish three objectives: comprehension, accuracy and completion.” There are a lot of things that can get in the way of achieving these three goals.

Before discussing some of the mistakes that are made when questionnaires are written, some background on why poorly-written questions enter into expensive research projects is in order.

  • Lack of preparation and proofing

Many sets of eyes need to review a questionnaire prior to field. Everyone from the client to the chief analyst should review the questionnaire to determine that the necessary inputs are there and that they are worded correctly. The client must ensure that all questions needed to deliver the desired outcomes are included, while the analyst has to ensure the questions are in place to support all of the requested analysis. Certain advanced analytics require specific inputs, and without them optimum analysis cannot be performed.

  • Too many chefs

The old saying is applicable here. Oftentimes too many personnel are in the mix when it comes to a questionnaire’s design. If an advertising agency is involved, it could mean the client, the agency and the research company are all in the mix. And many times changes are made for reasons not grounded in sound research, such as “because we can” or to justify someone’s position. To simplify the process, identify one key, experienced researcher who has final authority to approve the questionnaire. During the evolution of the project, all changes and additional inputs should be tracked and recorded.

  • Lack of experience

Too often younger research associates are put in charge of questionnaire writing and preparation. While it’s not at all a bad idea to help them grow in their careers and knowledge, a junior researcher or account planner should never have the final say on questionnaire design and approval.

  • Rushing

Many times research requires a fast turnaround, though less often in quantitative research, where projects almost always involve longer planning and preparation. Often at the end of a budget year there is a last-minute rush to spend money before it is lost. This is where problems can arise. Don’t rush. And beware of the other stumbling blocks mentioned here that can cause multiple problems. Research money is a much-valued commodity. Don’t blow the project because the process had to be completed in a hurry.

  • Big egos

Unfortunately, there are people in all businesses who change things only because they can. In the research business changes are often made to questionnaires with no methodological justification. Don’t be afraid to step in and make the right facts known on how a specifically-worded question will affect the overall plan. Hierarchy being what it is within companies, create a checks-and-balances system among client, agency and research vendor.

  • Leaving it all to the research company

While your research vendor should know all of the intricacies of questionnaire development, leaving it all to them is a bad idea. They may get wording and methodological precision down, but they usually have no idea which exact outcomes are the priority. Implement a well-coordinated effort to design the project questionnaire, identify the top goals and priorities, and continually discuss them among all parties throughout the design and execution of the project.

  • Ulterior motives

This should not be a problem in “regular” research, but it can be problematic with political research of any kind. Ulterior motives not only cover the macro idea of influencing the process to get the results YOU want but can also manifest themselves in more benign ways. Many of those ways are discussed in the next section.

Common mistakes

So, now you’ve got your team coordinated, your goals set and you know what you want to find out from the project. Easy, right? Well, it should be, but a lot of mistakes are made in the actual execution of the survey instrument. Some of the most common questionnaire mistakes involve the following:

  • Either/or questions

The either/or query is just that. It asks respondents to identify some aspect of a question by giving them an either/or choice. While this may sound fine, it often is not. For a simple example, asking a respondent whether their favorite color is blue or green is not helpful and can skew data because the execution was biased from the start. Their favorite color may, in fact, be red. Political groups with a partisan agenda often use this method. Example: “Would you describe Bill Clinton as an adulterer or a crook?” He may be neither or both, but an unsuspecting public may give unscrupulous or unknowing researchers inaccurate answers. More often than not it is just a poorly-written question, which leaves respondents with no way out. At the first appearance of an either/or question, ask yourself whether an open-ended question might be more appropriate.

  • Double-barreled questions

Double-barreled questions also often leave the respondent with no way out. An example is “Do you think your boss is friendly and fair?” The boss may in fact be both, but he or she could be only one or the other, or neither, which forces the respondent into an uncomfortable situation. Assuming the two concepts are related in the respondent’s mind invites trouble. Singletary suggests that the question writer and reviewers always put themselves in the place of the respondent as the survey instrument is designed and finalized. It goes without saying that all questionnaires should be tested for time, sequence and clarity before field operations commence. If you are doing cultural in-language research, the questionnaire should also be back-translated with native speakers to make sure nuances and translations are clear and correct.

  • Future intent/usage questions

Some future intent questions are workable, such as “What is the likelihood you will purchase a new car in the next 12 months?” But often, companies seeking more precise measures of profit potential ask questions that can skew data and make it unbelievable. For example, a question that may be a stretch is, “How much do you think you will spend on men’s underwear in the next 12 months?” Does anyone really know the answer to this question? In our experience, even among those who think they know, big differences exist among cultural segments, with some tending to exaggerate future purchase intent. A way to get a potentially more accurate answer is to base future numbers on past 12-month purchases within certain vertical product segments. With consumer products and goods, purchase cycles are more frequent and consistent than with large purchases such as houses and cars. One would hope that, on average, underwear purchases happen far more often and consistently than new car purchases. Deeply delving into future purchase intent can be a slippery slope, so navigate with caution and be clear and conservative on financial potential.
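The projection approach described above — basing future numbers on reported past-12-month purchases rather than asking respondents to guess a future figure — can be sketched in a few lines. The spend figures and the use of a median here are purely illustrative assumptions, not data or a method from this article:

```python
import statistics

# Hypothetical past-12-month category spend reported by five respondents
# (illustrative numbers only).
past_spend = [40, 55, 30, 60, 45]

# A conservative projection carries a central tendency of past behavior
# forward instead of relying on respondents' guesses about the future.
# The median resists exaggeration by outlying respondents.
projected_spend = statistics.median(past_spend)
```

A mean could be substituted where the distribution is known to be well behaved; the median is simply the more conservative default when some segments tend to overstate.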

  • Scales usage questions

Scales are a valuable part of research but should be designed with expertise and never overused. It is well known that different cultural segments use and understand scales in different ways. Hispanics tend to over-rate, while some Anglos and Europeans will give only average scores for something they rate highly. While this is better left to an entire essay on scale usage among cultural segments, it is well supported that using varying scales (1-10, 1-7 and 1-5) along with rotation of inputs will keep respondents from becoming overloaded and simply falling into a pattern of responses. Minimizing response options is also part of this equation. Never overload respondents with response options that may cause them to tune out. When using agreement scales, try to keep it to no more than four or five options: excellent, good, fair, bad; or very much agree, somewhat agree, disagree, very much disagree, no opinion.
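Rotating the inputs, as recommended above, simply means presenting rated items in a different order for each respondent so no item always benefits from appearing first. A minimal sketch of that idea follows; the attribute names are hypothetical, and seeding by respondent ID is one assumed way to make each ordering reproducible:

```python
import random

# Hypothetical attributes to be rated on an agreement scale.
ATTRIBUTES = ["taste", "price", "packaging", "availability"]

def rotated_order(attributes, respondent_id):
    """Return the attribute list shuffled per respondent.

    Seeding the generator with the respondent ID gives each respondent
    a stable but different presentation order, countering order bias.
    """
    rng = random.Random(respondent_id)
    order = list(attributes)
    rng.shuffle(order)
    return order
```

In a fielded survey the programming platform would normally handle this rotation; the sketch only shows the logic being asked for.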

  • Hypothetical questions

Hypothetical questions are just that: hypothetical. While these questions are often used in research, using their results to build business models and make business decisions can be problematic. Why? Attitudes do not always match behaviors. An example would be asking respondents, “If carmaker X offered a six-door pickup truck, would you be likely to buy one?” Even lacking information on the vehicle’s appearance or cost, a respondent who is a fan of carmaker X or a pickup truck owner may say yes, yet have no intent to purchase the vehicle once they saw it. There are better ways to get at accurate answers than posing hypothetical situations in quantitative research (for example, qualitative research).

  • Negatively-phrased/double-negative questions

One might be surprised at how many negatively-worded questions appear in questionnaires. “Would it not be fair to say this is untrue?” If you had to stop and think about that question in print, imagine being on the phone and having to think it through. Imagine how such questions eat up valuable time in questionnaire completion. Converse and Presser (1986) identified many words and phrases that can wreak havoc through what they call implicit negatives: words and phrases that carry meaning beyond their face value. Be careful with negatives, and don’t assume that a mirror positive version is a true opposite. Stay away from negative words like “not,” “forbid” and the like.

  • Leading questions

This mistake is seen most often in partisan political research. Setting the respondent up with leading information in advance of the question is bad research: “The media has really been down on George Bush because of Iraq. Would you agree he is not doing a good job on Iraq policy?” If you don’t believe such questions are asked, simply read the questions in many of the political polls published in print and online. Reputable political research companies usually avoid such bias, but it occasionally shows up in consumer research as well. Don’t lead. Just ask a simple, straightforward question.

Accomplish the goals

While there are many more examples, those mentioned above are the most common areas where bad questionnaire design and execution can cause trouble during the research process. One thing to keep in mind: keep it simple. It may sound cliché, but it is true. Keep it simple and apply the ideas and concepts discussed in this article. By doing so, one can accomplish the goals of comprehension, accuracy and completion. This in turn produces quality data for the creation of valuable analytics. Success at comprehension, accuracy and completion almost always equals happy agencies and happy clients.