Strategies for avoiding respondent fatigue

Editor's note: Matthew Walmsley is chief strategy officer at New York research firm SurveyHealthcare. 

In late 2017 the British Healthcare Business Intelligence Association Response Rate Task Force released its report Reversing the Decline in HCP Participation, which addressed the industry concern around the willingness of health care providers (HCPs) to take part in market research. Our firm, SurveyHealthcare (SHC), is proud to have provided sample and fieldwork to help complete this research.

To take it a step further, I sat down with SHC’s management team – Anel Radoncic, senior vice president, programming manager; Anthony Howard, vice president, technical operations; Mary Ellen Fasano, senior vice president, project services, quantitative; and Christina Pereira, vice president, project services, qualitative – to delve into some best practices for streamlining the survey process to mitigate respondent fatigue while maintaining data quality.

Q: What is survey fatigue? How do you define it?

Anel Radoncic: Survey fatigue occurs during survey-taking when the respondent becomes bored, exhausted or uninterested in the survey, usually due to repetition of the same or similar questions, poor survey design and/or survey length.

Anthony Howard: The consequences of survey fatigue are that a respondent might refuse to finish the study, might decide to stop doing research altogether for our panel or, worst-case scenario, enter thoughtless responses to a study and corrupt the data quality for our client’s research.

Q: How common is survey fatigue?

Radoncic: It is pretty common but more prevalent in poorly designed surveys and surveys that are longer than 45 minutes.

Howard: Common. My team handles the help desk, so we’re on the front lines getting feedback from our panelists. If there is a study with a poor design, we hear about it. Panelists are not afraid to let you know they did not enjoy a particular survey. We try to submit all feedback to the project manager running the study so if they are having struggles with a study, they can present it to the client. Unfortunately, by the time a study is in field, clients are typically against making changes to questions.

Mary Ellen Fasano: Agreed, it is quite common and it’s a problem both for maintaining the health of a panel and for ensuring accurate data. When respondents lose interest, their responses become less trustworthy.

Christina Pereira: I would say it's more of an issue for quant. For qualitative, the issue is more one of annoyance at being screened out, or at not being scheduled for projects after going through the screening process. Most of our screeners are a bit lengthy and take respondents through all of the screening questions before termination, in case a client wants us to reach back out to a screened-out person who was close to qualifying.

Q: What are the top reasons for survey fatigue?

Howard: Poor survey design and poor questionnaire design – plain, boring screens with large blocks of text and repetitive questions are the easiest way to run a respondent into the ground.

Fasano: Interview length, types of questions and relevant subject matter are also factors.

Pereira: Repetitive questions, too many attributes on scales, trying to drill down too much, conjoint designs that seem to go on forever, not pre-testing to see what the respondent user experience will be like.

Q: What is a good/recommended survey length?

Radoncic: I would say the average health care research survey length is around 30-45 minutes. I wouldn’t recommend going over 60 minutes, because then respondents would need to answer the survey in more than one sitting and we wouldn’t accurately be able to track the survey length.

Howard: Depends on the topic and goal of the research. Generally, we find that studies over an hour long tend to be pretty brutal.

Fasano: I think the industry take is that an ideal survey length hovers around 20 minutes for consumer but longer, maybe 30 minutes amongst health care professionals. Much longer than that and you risk impacting the integrity of the data as respondents do get bored and lose interest. A couple of caveats are that if the topic is unique and/or particularly interesting to a respondent, they will remain engaged and provide valuable information for an hour-plus. The incentive or honorarium amount is also a factor. With an appropriate amount, most panel members will answer honestly and thoughtfully throughout the survey regardless of length.

Pereira: Really depends on the topic, expertise of the respondent and survey design. But 30 minutes max will get you the best results for HCP, 20 minutes max for consumer. If the survey is well designed, has different question setups and an interesting topic for the respondent, then you can likely keep them engaged for 45-60 minutes.

Q: Is there a particular time or day that you notice respondents are more prone to take surveys? Is there a particular time or day that you notice respondents are more disengaged?

Howard: The mornings are usually slow when it comes to responses but that’s natural. People are at work and busy in the mornings. After the workday, responses increase. The weekends are very productive. Holidays in the U.S. are also a very productive time. I have not noticed any correlation between time of day and disengagement.

Pereira: I agree with Anthony. I also find that around big holidays both qualitative response and patient response slow a bit. People find it harder to give us their scheduled or appointed time and tend to have more emergencies or need to be rescheduled a little more often than usual.

Q: How do you avoid redundancy when programming the questionnaire?

Radoncic: We provide different visual formats to present the questions to respondents. For example, if there is a question that involves rating, we can present the question in a few different visual formats – a table with radio buttons, slider rating, button rating, etc.

Howard: Switching up the way the respondent has to answer. Card-sort on one question, sliding scale on another, rank sort on the next, etc.

Fasano: This varies. If there are several similar types of questions, e.g., ratings or rankings, mixing up the format as noted above is very helpful. Within a list of attribute ratings, however, the client may intentionally ask the same question with different wording. This is in part to validate results but also to make sure respondents are paying attention.

Q: Talk a little bit about the importance of making a survey aesthetically pleasing.

Radoncic: At SHC, we keep the survey page very simple and easy to read to avoid any distractions from the main focus – the content. Our surveys are screen-centered and are designed for optimal user experience, with the question on top, answer choices below, followed by the “next” button. 

Howard: An optimized survey interface is crucial. Luckily, Web design is going through a minimalist stage so we don’t have to be super flashy with the presentation. But nobody wants to look at bland, boring questions for 60 minutes. Small things such as animated sliding scales and card-sorting questions really help retain a respondent’s attention throughout a long study. We tend to see less straightlining in studies where the presentation and design of similar questions are switched up, as opposed to studies where the respondent thinks they’re answering the same question multiple times in a row because the question and design are so similar.

Pereira: It needs to be clear and displayed neatly on the screen so respondents know at a glance how you want them to answer. Filling a screen with a lot of instruction and validation makes the survey tedious and difficult to get through.

Q: What is considered best practice when you receive an open-ended response that clearly lacks thought?

Howard: This usually depends on how the client reacts. If it’s clearly gibberish or a poor response, SHC points it out and throws out the respondent’s entire survey. We want our client’s research to be thorough and accurate. If the client is happy with the answer, though, and there is no obvious or egregious negligence, the respondent’s answers are left in the data. We usually go through and double-check the rest of their responses to ensure they’re a quality respondent.

Fasano: Our project managers review open-end responses before sending data to the client. When dealing with physicians, particularly unique specialties or topics, it’s sometimes hard to tell if their responses make sense, since we are not experts in their field. It’s a bit easier to flag on the consumer side.

Q: What is industry standard regarding having “N/A” and “I don’t know” as answer choices?

Radoncic: Nine times out of 10, the client will include an option such as “Don’t know,” “None of the above” or “Not applicable,” which excludes any of the given options. It is a simple and honest answer and we don’t want to compromise the integrity/quality of the data by forcing respondents to answer something they legitimately don’t know or are not aware of.

Fasano: From a design standpoint, it does make sense to include a “don’t know” option for most if not all questions. A “don’t know” is often a legit response and respondents need a way to move forward in the survey rather than either dropping off or entering an invalid response to keep going.

Pereira: This is a point where I disagree with the team. In online quant research, I would not allow “don’t know” unless it really makes sense to do so on the type of question we are talking about. On five-, seven-, nine- and 11-point scaled-type questions where you are providing means in your tables it is best to not include a “don’t know” option and force them to choose something on the scale. On other types of questions that are more behavioral than opinion, then the “don’t know” option would make sense. The reporting team on the client’s end should make these decisions based on how the data will end up being reported. It is all about the stats in quant data. Sometimes a valid response would be “not applicable” as well.

Q: What are some techniques that are utilized to prevent survey fatigue?

Radoncic: Speed traps. We can warn a respondent during the survey or flag their survey silently in the data, so that our client can be aware of who is speeding and make an informed decision.

Fasano: We employ some attention checks when needed, e.g., a long list of 10+ attributes might include an item in the middle that says something like “Enter 3 as the answer for this statement” in order to confirm that respondents are thoroughly reading the content.

Fasano: Mental breaks. These are often used in surveys and with long surveys in particular. We provide a progress bar at the top of the survey screen that updates dynamically so that respondents can gauge remaining time needed.

Radoncic: As a precaution against straightlining, we have a script behind the scenes that will silently flag a respondent for straightlining for our clients to review and determine their outcome.
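
For teams that program their own surveys, here is a minimal sketch of what quality flags like those described above (speed traps, attention checks and straightline detection) might look like. The field names, thresholds and data layout are illustrative assumptions, not SHC’s actual implementation.

```python
# Hypothetical quality-control flags for a completed survey response.
# Field names, thresholds and the data layout are illustrative only.

def flag_response(response, median_seconds, attention_item="q12_check",
                  expected_attention_answer=3, grid_prefix="q20_"):
    flags = []

    # Speed trap: completion time well below the median suggests speeding.
    if response["duration_seconds"] < 0.4 * median_seconds:
        flags.append("speeder")

    # Attention check: e.g., "Enter 3 as the answer for this statement."
    if response.get(attention_item) != expected_attention_answer:
        flags.append("failed_attention_check")

    # Straightlining: identical answers across every item in a rating grid.
    grid_answers = [v for k, v in response.items() if k.startswith(grid_prefix)]
    if len(grid_answers) > 1 and len(set(grid_answers)) == 1:
        flags.append("straightliner")

    return flags


# Example usage with a fabricated response record.
respondent = {
    "duration_seconds": 310,
    "q12_check": 3,
    "q20_a": 5, "q20_b": 5, "q20_c": 5, "q20_d": 5,
}
print(flag_response(respondent, median_seconds=1200))
# ['speeder', 'straightliner']
```

As described above, flagged respondents are typically left in the data and surfaced to the client for review rather than removed automatically.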

Q: What about splitting long surveys into a series of shorter surveys?

Radoncic: I actually don’t suggest splitting surveys across multiple sittings. I think respondents should have one consistent mind-set throughout a single survey.

Fasano: I think splitting the surveys into shorter series is often ideal but not our call to make as the data collectors. Our clients would decide whether or not they want to ask everything at once – which is more common – or split into shorter surveys. As much talk as there is around survey fatigue and respondent cooperation rates, in the end budget and deadline are the driving factors for most clients.

Q: What is the best piece of advice you would offer your clients to better streamline the survey process?

Radoncic: Clients should ensure the survey content is clear, concise and easy to understand. Questionnaires should be error-free, written in a way that is easy to understand with clear programming instructions and specific requirements.

Howard: Keep the length-of-interview as short as possible. This helps with testing the questionnaire, testing the redirects and keeping respondents engaged. Vary the presentation and answering of questions. From a technical standpoint, eliminate as many variable pass-ins on entry links as possible. They take a long time to populate on our end, slowing down the process, and they also invite trouble: there are too many variables to keep track of, and it causes technical issues when they’re not passed in as expected. I’m referring to things like city, state, first name, ZIP, segment, etc. Those don’t all have to be passed in on an entry link; they can be loaded on the back end. These things drastically cut down setup and fielding time.
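
To illustrate the entry-link point, here is a hypothetical sketch, assuming a panel file keyed by respondent ID and an illustrative survey URL: the link carries only the ID, and attributes such as city, state and segment are joined on the back end when the survey starts, rather than being passed in as query-string parameters.

```python
from urllib.parse import urlencode

# Hypothetical panel file keyed by respondent ID; in practice this would be
# a database or sample file maintained on the back end.
PANEL = {
    "R1001": {"first_name": "Dana", "city": "Boston", "state": "MA",
              "zip": "02118", "segment": "cardiology"},
}

BASE_URL = "https://surveys.example.com/s/ABC123"  # illustrative survey URL

def entry_link(respondent_id):
    # Only the ID travels on the entry link; everything else is looked up later.
    return f"{BASE_URL}?{urlencode({'rid': respondent_id})}"

def load_respondent(rid):
    # Back-end join: attach city, state, segment, etc. when the survey starts.
    return {"rid": rid, **PANEL[rid]}

print(entry_link("R1001"))
print(load_respondent("R1001"))
```

The design choice is simply to keep the link minimal and move the per-respondent data join to the back end, which reduces setup time and avoids errors when expected parameters are missing from the link.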

Fasano: A few things. Keep the survey length down. Questions should be concise and clear. Vary the types of questions being asked and the structure of the questions. Be transparent about the expected length-of-interview. Be respectful of the respondent’s time, i.e., offer an appropriate honorarium amount. And only ask questions that are relevant to the topic or for data analysis.

Pereira: I would also add that it is important to pre-test the quant survey, and not only visualize the user experience but also talk to a respondent about their experience going through the survey, to try to correct any issues before launching.