Help them help you

Editor's note: Robert DeVall is director, and Charles Colby is principal, chief methodologist and founder, at Rockbridge Associates Inc., a Great Falls, Va., research firm.

With an ever-expanding list of options at consumers’ fingertips and the figurative pedestal given to them via social media, organizations have come to recognize the growing importance of keeping their customers happy. Billions of dollars are spent on efforts to improve the customer experience, and the companies that are most successful typically rely on feedback from customers to drive these efforts. This is generally good news for market researchers but also presents a challenge: As more and more companies vie for customers’ attention, that attention becomes harder to get. This can result in lower response rates, which increase the likelihood of non-response bias and can ultimately make the findings less reliable. This raises the question: What can market researchers do to maximize response rates?

To answer that question, we first need to understand what drives response rates. The response rate, in simplest terms, is the number of people who completed the survey divided by the number of people in the sample who were eligible for it. This means that to effectively raise response rates, we must focus on both the recruiting tool and the survey instrument. For surveys administered online where respondents are invited via e-mail, maximizing response rates entails 1) getting the respondent to open the e-mail, 2) getting them to start the survey and 3) getting them to finish the survey. Failing to take any of these facets into account can result in response rates that are less than optimal.
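For concreteness, here is a minimal sketch of that definition in Python; the counts are illustrative placeholders, not figures from any actual study.

```python
# Response rate = completed surveys / eligible sample members.
completes = 312          # people who finished the survey
eligible_sample = 1250   # people in the sample eligible for the survey

response_rate = completes / eligible_sample
print(f"Response rate: {response_rate:.1%}")  # -> Response rate: 25.0%
```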

Objective 1: Get the respondent to open the e-mail

Before sending the e-mail invitation, it is a good idea to send a heads-up e-mail or letter notifying the respondent that they can expect an invitation to take the survey. This is especially useful for customer or non-profit member surveys, where the heads-up communication comes from an executive or recognizable individual within the organization. This lends credibility to the company administering the survey and can also be used to request that certain e-mail addresses be placed on a safe-sender list.

The next step is getting the e-mail invitation into the respondent’s inbox. This can be more challenging than it seems, as spam blockers don’t readily share their filtering practices (with good reason), nor are all spam blockers alike in the criteria they use for filtering. That means there is no foolproof formula for avoiding the spam trap but there are some things market researchers should do (or not do) to at least have a fighting chance.

  1. Avoid words like “free” and “win,” ALL CAPS, excessive punctuation (“!!!”), symbols and special characters.
  2. Follow the requirements of the CAN-SPAM Act. This includes putting a valid physical mailing address in your e-mail and offering a clear opt-out link.
  3. Don’t spoof e-mails. Sending from a mail server you own (and have properly configured) helps ensure that you pass spam filter authentication tests; a minimal way to check a domain’s authentication records is sketched below.
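As a rough illustration of that last point, the following Python sketch uses the dnspython package (not part of the standard library; install with pip install dnspython) to confirm that a sending domain publishes SPF and DMARC records. The domain name is a hypothetical placeholder and this is only a sanity check, not a full deliverability audit.

```python
import dns.resolver  # pip install dnspython

def lookup_txt(name):
    """Return all TXT record strings published for a DNS name, or [] if none."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

domain = "surveys.example.com"  # hypothetical sending domain

# SPF lives in a TXT record on the domain itself; DMARC on the _dmarc subdomain.
# (A DKIM check would also need the mail provider's selector, so it is omitted.)
has_spf = any(r.startswith("v=spf1") for r in lookup_txt(domain))
has_dmarc = any(r.startswith("v=DMARC1") for r in lookup_txt("_dmarc." + domain))

print("SPF record found:  ", has_spf)
print("DMARC record found:", has_dmarc)
```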

Researchers should also pay careful attention to the FROM and SUBJECT fields of the e-mail. The FROM field should be recognizable and should avoid generic terms (info, feedback, customer support, etc.). Using the company name is the most common practice but using a well-known company representative’s name can also be effective. The SUBJECT line is essentially the headline of the e-mail and should make the recipient want to keep reading. It is important to convey the topic of the e-mail and highlight what is in it for the respondent. Ideally, subject lines should be under 50 characters. Clear, concise subject lines are more likely to be opened.

The next thing to consider is the date and time the e-mail is delivered. The objective here is to get the e-mail to the respondent at the time they are most likely to read it. There are several theories about the best time to send e-mails, with the general consensus being to send them during the daytime, with Tuesday, Wednesday and Thursday being the best days to do so. That said, this may differ depending on the audience being targeted. For instance, some groups may be more accessible in the evenings. It is also important to take holidays and time zone differences into account.

Lastly, send reminder e-mails. There are a number of reasons a respondent may not have completed the survey besides not wanting to. They may have missed the original e-mail, meant to take the survey later and forgotten, started the survey and gotten interrupted, etc. The number of reminder e-mails depends on the audience and the length of the data collection period but two reminders are generally sufficient to prompt respondents without annoying them. It is also critical that reminder e-mails be targeted to those who have not completed the survey. Ideally, targeting should be taken a step further to tailor the message based on where the respondent is in the process. For example, if a respondent started the survey but did not finish, they may get a “you’re almost done” e-mail, while someone who has not started would get an e-mail that more closely resembles the original invitation.
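That routing logic is simple enough to sketch in a few lines of Python; the statuses and template names below are hypothetical, chosen only to illustrate the targeting described above.

```python
def reminder_template(status):
    """Map a respondent's survey status to the reminder they should receive."""
    if status == "completed":
        return None                   # never remind someone who already finished
    if status == "partial":
        return "youre_almost_done"    # resume message for partial completes
    return "invitation_reminder"      # restate the original invitation

# Illustrative respondent records; in practice these come from the survey platform.
sample = [
    {"email": "a@example.com", "status": "completed"},
    {"email": "b@example.com", "status": "partial"},
    {"email": "c@example.com", "status": "not_started"},
]

for person in sample:
    template = reminder_template(person["status"])
    if template is not None:
        print(f"Send {template} to {person['email']}")
```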

Objective 2: Get the respondent to start the survey

The e-mail invitation has passed through the anti-spam gauntlet, found its way to the respondent’s inbox and the respondent is taking the time to read it. It is now time to drive home the value proposition of the survey: the reward the respondent gets in exchange for their time. It is also important to use the e-mail to set expectations for the survey, including how much time it will take, the topics that will be covered, the privacy/security of the information and how the information will be used.

To many, “reward” means some sort of tangible incentive (gift card, prize drawing, etc.). Rewards of this type are definitely effective but they are not always appropriate or in the budget. Researchers should also be cognizant of biases that certain incentives could create (e.g., offering a discount at the company being researched is likely to elicit responses from those who plan to shop there again, potentially leaving out shoppers who were so dissatisfied that they do not plan to return).

While tangible rewards are certainly a nice way to show respondents appreciation for their time, researchers should not overlook the intangible rewards respondents receive from taking a survey. Most people are helpful by nature and, for many, taking surveys can be empowering, as it enables them to actively help companies improve. To this end, it is important to communicate how their feedback will be used to improve the product, service or experience. It is also beneficial to close the feedback loop whenever possible by communicating the findings from the research and the positive changes made based on customer feedback.

The body of the e-mail should also instill trust. As previously mentioned, being CAN-SPAM-compliant is a step in the right direction but market researchers can go further by including contact information for questions and support. Respondents are also generally more likely to take the survey if anonymity and confidentiality are promised. Regardless, a link to the privacy policy should be included in the e-mail or on the first screen of the survey to ensure respondents are clear on how their data will be used.

Another way to instill trust and establish authenticity is to personalize the e-mail using merge fields. This also makes the respondent feel that they are receiving personal attention and distinguishes them from the group. Respondents are more likely to ignore a generic e-mail addressed to a group, as they are prone to assume someone else from the group will respond.
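Most survey platforms handle merge fields natively; as a rough illustration of the mechanics, here is a minimal sketch using Python’s built-in string.Template, with hypothetical field names and placeholder copy.

```python
from string import Template

# A skeleton invitation with merge fields for the recipient's name and the
# sponsoring company; the wording itself is placeholder text.
invitation = Template(
    "Dear $first_name,\n\n"
    "As a valued $company customer, your feedback will directly shape how "
    "we improve our service. The survey takes about 10 minutes.\n"
)

print(invitation.substitute(first_name="Pat", company="Acme"))
```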

Lastly, tell respondents how to start the survey. This may seem obvious but is often overlooked. The link to the survey should be in the upper half of the e-mail and be immediately noticeable. Most respondents skim the e-mail at best and many stop after the first paragraph. They are not going to take the time to search for a link that is embedded deep within the text of the e-mail.

Objective 3: Get the respondent to finish the survey

The final step on the path to higher response rates is getting respondents to finish the survey. This is all about minimizing the “cost” of taking the survey. To minimize that cost, it is imperative that the survey be as short as possible, easy to take and convenient.

Ask less. The time it takes to complete the survey is the primary burden the respondent faces and often one of the more challenging things to reduce. It is important to identify and include only the questions that are needed to meet the objectives and avoid the nice-to-know questions. The question wording should be clear and concise, and skip logic should be used to avoid asking questions that are not applicable. Lastly, include a progress bar. Respondents appreciate being able to see how far along they are in the survey, and seeing the “finish line” can help motivate them to complete it.

Make it easy. The last thing a respondent should feel when taking a survey is frustration. Frustration can stem from unclear question wording, insufficient answer choices, clunky design and errors in the programming. The vocabulary used in the survey should be simple and jargon-free. Answer choices should be comprehensive; respondents often get annoyed when they are forced to select an answer that does not apply to them, so “other” and “none of the above” options should be included where applicable.

Using interactive scales and sliders is a great way to engage respondents but features like these can also detract from the experience if they do not make sense for the questions being asked. Question formats should also be used consistently for each question type. This allows the respondent to get a sense of what the question is asking before even reading it. Of course, the most important thing to avoid is programming errors. Test the survey, test it again and then do a soft launch to a portion of the sample to ensure there are no issues.

Make it convenient. Giving respondents the freedom to take the survey where and when they want increases the likelihood of participation. Making the survey accessible on a variety of devices is paramount to providing this flexibility. Mobile accessibility is particularly valuable, as it allows respondents to take the survey during their downtime (e.g., waiting for the doctor, traveling, etc.). It is also important to allow respondents to save their responses and pick up where they left off, as a respondent who loses their progress is unlikely to enter their responses a second time. Again, this gives respondents the flexibility to complete the survey on their own time and at their own pace.

A methodical approach

Maximizing response rates requires a methodical approach that takes into account the audience and topics being addressed and tailors all aspects of the data collection process accordingly. These guidelines provide a solid footing for achieving higher response rates but, depending on the audience and topic of the survey, response rates can still end up being less than optimal. So what then? Thankfully, lower response rates do not necessarily signal disaster for the study.

Responders and non-responders do not come from different planets. The reasons for not responding to surveys are typically due to a lack of time or unwillingness to complete surveys, rather than to a dramatically different viewpoint on the study topic. The result is that the general conclusions from a study are likely to remain the same regardless of response rate.1

The point can be illustrated by running the math on a hypothetical situation. Suppose an online satisfaction survey achieved a 25 percent response rate and that the satisfaction level was 65 percent. Qualitatively, the conclusion may be that satisfaction is “OK” but there is room for improvement. Assume that there is non-response bias and that non-responders are only 55 percent satisfied. What is the impact of non-response? If we had doubled our efforts with a more rigorous data collection methodology and achieved a 50 percent response rate, the satisfaction rating would be 60 percent, only five points off from the statistic computed using the methodology with the lower response rate.2 The conclusion would remain much the same.
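The arithmetic behind footnote 2 is a simple weighted average, sketched below in Python using the hypothetical figures from the example above.

```python
# Blend the observed responders (25% of the sample at 65% satisfaction) with
# the converted non-responders (another 25% at an assumed 55% satisfaction).
observed = 0.25 * 0.65     # contribution of the original responders
converted = 0.25 * 0.55    # contribution of the former non-responders
blended = (observed + converted) / 0.50  # normalize by the new 50% response rate

print(f"Blended satisfaction: {blended:.0%}")  # -> Blended satisfaction: 60%
```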

Another factor to consider is that comparisons across brands or over time will be similarly affected by the same response bias, allowing valid measures of differences and/or change. So long as the methodology remains consistent, stakeholders can take comfort that differences and changes are probably due to real factors and not to a response bias.

This is not to say response rates aren’t important, just that survey data should not be thrown out wholesale because the response rate does not approach 100 percent.


FOOTNOTES
1 No two studies are the same; researchers need to make a judgment call as to whether non-responders could be dramatically different and design methodologies to minimize the impact.
2 ((25% × 65%) + (25% × 55%)) / 50% = 60%