Editor’s note: Allyson Kuper is a consultant and Tim Glowa is a co-founder at Bug Insights, a Texas-based marketing analytics company.

Grid questions are among the most commonly used question formats in marketing research. The format is simple to administer and relatively easy for respondents to understand. The response options (such as a level-of-agreement scale, from strongly agree to strongly disagree) run across the top of the grid and each row asks a separate question.

From a data collection perspective, this is intended to be a very efficient way of collecting opinions from respondents. Survey participants need only view the scale once and can simply work row by row, answering each question on the same page.

The big problem is that grid questions are simply not working as a data collection method. The format is nearly impossible to view effectively on smaller-screened devices, yet more and more people are using mobile devices to participate in online surveys. Most troubling of all, grid questions result in bad data.

Let’s explore each of these problems in more detail.

The format is not mobile-friendly

As the use of mobile devices explodes, more and more respondents are completing surveys on handheld devices. Surprisingly, many research providers are not ready for this change. According to Fortune, 17 percent of researchers provided mobile surveys as of December 2012. Penetration has likely increased since then but survey designs have often not been altered or optimized for viewing on mobile screens. Our research has shown that up to one-third of survey-takers complete surveys on a mobile device and that number is rising.

With such a large percentage of survey participants accessing surveys via mobile devices, we have to consider the impact of question design. Unfortunately, though the concept of grid questions seems to make sense for marketing research, in practice these questions are detrimental to research efforts. In a recent survey we tested the abandonment rate for grid questions relative to other question types. We found that survey-takers are three times more likely to abandon a survey when they reach a grid question than at other question types, which is consistent with what we have seen in other studies.

As mobile usage grows, it will become more important to deliver content that is well suited to these devices. For researchers, this means formatting for the screen to allow a respondent-friendly survey experience. If users are required to scroll both horizontally and vertically, they are more likely to abandon the survey.

Grid questions result in bad data

Although grid questions may simplify the data collection process, they result in the collection of bad survey data. When answering a grid question, participants often provide nearly identical answers across grid items, typically using only a few scale points to rate most (if not all) items. We have consistently seen that, when faced with a grid question, 90 percent of participants use only two or three scale points for 80 percent of the grid items. The result is very little differentiation in the answers given and very little insight into true preferences.

In a study we conducted on health care program preferences, participants were asked a typical grid question: to indicate the importance of a series of plan features on a scale of 1 to 6, where 1 equaled “not important at all” and 6 equaled “completely important.” When we analyzed the results, we saw that three of the plan features were rated on average as a 3 (“somewhat not important”). However, the other 17 items all averaged between a 4 and a 5 (“somewhat important” and “important”). This lack of differentiation is consistent with what we have seen in other studies where grid questions have been used and it results in data that is essentially meaningless: while we may know which three items are least important, we have little data to differentiate between the other 17.

Grid questions may also lead to bad data because of how little time participants spend answering them. When participants are presented with a page full of questions all linked to the same scale, they are less likely to take time to read and think through each one. We’ve seen that the time to complete one survey question averages between 10 and 15 seconds when it is presented on its own page. However, when questions are presented in a grid, respondents spend an average of only three seconds per question. This raises an important question about how carefully participants are reading and responding to grid questions.

Addressing the problem

Design a better survey: Think about your audience and design a survey that makes sense for the way it will be delivered and viewed. Account for the fact that many participants will complete the survey on a mobile device. Designing a mobile-optimized survey is especially important when the target group is being reached via e-mail or a browser page, as these channels significantly increase the likelihood that participants will view the survey on a mobile screen.

Test your survey on a mobile device: Ensure that your survey displays correctly on a mobile device before launching. Avoid grid-style questions and questions that require extensive scrolling, as neither can be easily viewed on a phone or tablet. Build time into the project plan for testing on these devices and adjusting the questionnaire.

Be cautious about survey length: It is tempting to convert every grid item into a stand-alone question. While this might be the simplest way to adapt your survey, it can create problems with survey length and increase the likelihood of abandonment. Instead, ask only the questions that are necessary. Pare down the questionnaire using methods like factor analysis to determine overlap; this can detect duplication and identify questions that can be eliminated.
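
For readers who want to try this, below is a minimal sketch in Python of what a factor-analysis pass over pilot or historical grid data might look like, using pandas and scikit-learn. The file name, column layout and number of factors are illustrative assumptions, not prescriptions.

import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical export: one row per respondent, one numeric column per grid item.
responses = pd.read_csv("pilot_ratings.csv")

# Fit a small number of factors; three is a placeholder, not a recommendation.
fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(responses)

# Each item's largest absolute loading indicates which factor it belongs to;
# items sharing a factor are measuring much the same thing and are
# candidates for consolidation or removal.
loadings = pd.DataFrame(
    fa.components_.T,
    index=responses.columns,
    columns=[f"factor_{i}" for i in range(3)],
)
print(loadings.abs().idxmax(axis=1).sort_values())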

Do your own analysis: Leverage data from previous surveys to understand where variation and differentiation were lacking. Determine how frequently respondents used each scale point when answering grid questions, then calculate the percentage of respondents who used only one, two or three scale points on a given share of grid items (for example, 70 percent, 80 percent or 90 percent of questions). This will help you gauge the value of the data collected.
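
As a rough illustration, the following Python sketch performs that frequency check, assuming responses sit in a table with one row per respondent and one column per grid item; the file name and threshold values are placeholders.

import pandas as pd

# Hypothetical export: one row per respondent, one column per grid item,
# values on the survey's rating scale (e.g., 1 to 6).
responses = pd.read_csv("grid_responses.csv")

def share_covered_by_top_points(row, k):
    # Share of this respondent's answers accounted for by the k scale
    # points he or she used most often.
    counts = row.value_counts()
    return counts.iloc[:k].sum() / counts.sum()

for k in (1, 2, 3):
    covered = responses.apply(share_covered_by_top_points, axis=1, k=k)
    for threshold in (0.7, 0.8, 0.9):
        pct = (covered >= threshold).mean() * 100
        print(f"{pct:.0f}% of respondents used only {k} scale point(s) "
              f"on at least {threshold:.0%} of grid items")

If most respondents cover 80 or 90 percent of the items with just two or three scale points, as we have consistently seen, the grid is adding length without adding information.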

Understand your audience and pre-test your survey: The audience may play a large part in determining whether mobile use will be prevalent. For example, if you are targeting a demographic such as Millennials or workers in the technology industry, mobile usage will likely be much higher and you need to design and test accordingly. Do a pre-test with a subset of the target population; this will help determine the likelihood of mobile usage, gauge the effectiveness of mobile survey functionality and identify whether refinement is needed before going live.

Leverage best/worst conjoint: Best/worst conjoint uses the simple concept of trade-off analysis to better understand respondent preferences, often allowing us to understand an individual’s preferences better than he or she could articulate them.

Through a simple series of trade-off questions that ask respondents to identify the “best” option and “worst” option in a brief list, a best/worst conjoint survey provides us with a clearly defined picture of participant preferences. It allows us to understand not only the ordinal importance of tested features but also the magnitude relative to other features tested (i.e., how much more important one feature is relative to another).

This results in significantly greater differentiation in the data when compared to typical survey questions (such as grids) and provides us with far richer data that is truly actionable. Best/worst conjoint can also be optimized for mobile browsers, making it an ideal substitute for the typical grid question in many instances.
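
To make the scoring concrete, here is a minimal Python sketch of a simple “counting” analysis, one common way to summarize best/worst data. Production studies typically use more sophisticated estimation (such as hierarchical Bayes), and the file and column names here are illustrative assumptions.

import pandas as pd

# Hypothetical export: one row per respondent-item pair with columns
# "item", "shown" (times the item appeared in that respondent's tasks),
# "best" (times chosen best) and "worst" (times chosen worst).
tasks = pd.read_csv("bestworst_choices.csv")

scores = (
    tasks.groupby("item")[["best", "worst", "shown"]].sum()
    .assign(bw_score=lambda d: (d["best"] - d["worst"]) / d["shown"])
    .sort_values("bw_score", ascending=False)
)

# bw_score runs from -1 (always chosen worst) to +1 (always chosen best),
# so it yields both a ranking and the size of the gaps between items.
print(scores["bw_score"])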


“With mobile surveys, market research gets a makeover.” Fortune Magazine, March 25, 2014.

Methodology:
Bug Insights conducted an online study fielded from November 5 to November 15, 2014, collecting a total of 633 responses. The purpose of the study was to measure the attitudes and opinions of full-time U.S. employees; criteria for participating included being employed full-time in the U.S. and receiving benefits from an employer. Questions were presented in a number of formats, including conjoint and grid-type questions. The 15-minute survey measured attitudes and opinions toward benefits and rewards, identified employee frustrations and included a best/worst conjoint study on health care preferences; if a participant had recently moved to a new position, the study also assessed the reasons for leaving his or her previous employer.