Aiming for the best response possible

Editor’s note: Susan Frede is vice president, research and panel management at the Cincinnati office of research firm TNS.

Respondent quality has been a key concern in the marketing research industry over the last several years, but improving overall quality also means thinking more broadly about how research is designed. To consistently deliver meaningful, actionable information to clients, researchers need to follow best practices for conducting online research, and those best-practice recommendations should themselves be grounded in research-on-research. TNS recently examined four specific questions that address the fundamental issue of research quality.

Question #1: Should only generic survey invitations be sent to respondents?

The survey invitation is important because it is the first contact with a prospective respondent. It can influence both whether people decide to respond and how they respond.

TNS tested five concepts using two different survey invitations. The generic invitation simply stated that a new survey was available, without telling the respondent what the subject was, while the non-generic invitation identified the subject.

Sending invitations that identify the subject does not increase response rates, as some have speculated (Table 1). In addition, key-measure results for the groups receiving the generic and non-generic invitations are not significantly different, suggesting the type of invitation does not have the potential to change business decisions (Table 1).

Invitations that identify the subject do, however, have the potential to impact category and brand usage questions. The non-generic groups reported higher frequency of category usage, higher usage for several brands and a slightly higher number of brands used. TNS speculates that telling respondents the subject of the survey causes them to answer differently in order to qualify for the survey or for a possible product placement.

Sending generic survey invitations reduces the possibility of self-selection bias, which may be higher for certain categories. In addition, withholding the subject reduces the likelihood of respondents intentionally misrepresenting themselves or providing inaccurate information to qualify for surveys.

Question #2: Do survey reminders help or hurt response rates?

With Internet surveys, reminders are often sent to respondents because doing so adds little or no cost. However, data quality and respondent retention may be compromised when consumers are bombarded with repeated communications about the same survey.

TNS tested five concepts using survey reminders and then tested the same five concepts without them. In the reminder groups, only respondents who had not yet completed the survey at the time the reminder was sent actually received one.

Response rates are not consistently impacted by reminders (Table 2). For three of the five concepts, response rates are slightly higher when respondents are sent a reminder, while for the other two they are slightly lower. Only Concept 2 shows a significant increase in response rate, so reminders do not generally have the impact some expect. In addition, key-measure results for the two groups are not significantly different, suggesting that sending or not sending reminders does not have the potential to change business decisions (Table 2). Sending reminders also does not change the representativeness of the samples on demographics or category/brand usage.
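As an illustration of the kind of comparison behind these findings, the sketch below applies a basic two-proportion z-test to response rates for a reminder cell versus a no-reminder cell. It is a minimal, hypothetical Python example: the counts are placeholders rather than figures from Table 2, and the test is a generic statistical check, not TNS's actual analysis.

```python
# Minimal sketch (not TNS's analysis) of a two-proportion z-test comparing
# response rates between a reminder group and a no-reminder group.
# The counts below are illustrative placeholders, not figures from Table 2.
from math import sqrt, erfc

def two_proportion_ztest(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for H0: the two response rates are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                 # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))               # two-sided normal p-value
    return z, p_value

# Example: hypothetical Concept 2 cells, reminder vs. no reminder
z, p = two_proportion_ztest(x1=312, n1=1000, x2=265, n2=1000)
print(f"z = {z:.2f}, p = {p:.4f}")                 # p < 0.05 -> significant lift
```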

Therefore, TNS does not recommend sending respondents reminders about surveys they have not completed. With a managed access panel, reminders do not increase response rates or impact sample representativeness.

Question #3: Does excluding partial completes impact data reliability?

One priority for every survey is to maximize the number of respondents fully completing it, which is best accomplished through highly engaging questionnaires. However, it is also possible to set a partial completion point for different survey types and include all respondents who answered questions through that point.

TNS analyzed data from six monadic concept tests to understand the impact of incomplete surveys. The partial completion point for these tests is the purchase intent question.

Sample size does not dramatically increase when a partial completion point is set rather than full completion being required (Table 3). Generally, 40 or fewer respondents are added to each concept leg when the partial completion point is set at purchase intent, representing less than a 10 percent increase in sample size. The variation in the number of partial completes across the concept tests is likely driven by the subject of the survey as well as questionnaire differences (e.g., number of questions prior to purchase intent, total length of the survey, etc.).

The respondents who drop out are somewhat different demographically from those who complete the entire survey. Those dropping out are more likely to be older, retired and male. There are also some brand usage and habit differences between those dropping out and those completing the entire survey. Any demographic, brand usage and habit differences can lead to less-representative samples, so it is important to keep dropouts to a minimum.

Systematically excluding certain kinds of respondents can introduce bias, so it is important to determine whether purchase intent differs when partial completes are excluded. Although purchase intent ratings tend to be slightly lower for those dropping out than for those completing the entire survey, there are simply too few partial completes to change the overall results enough to matter (Table 3). Concept Test 6 is the only one in which scores shift by two percentage points or more, likely because of the slightly greater number of partial completes in that study; this suggests that, with a greater proportion of partial completes, overall purchase intent could be impacted.

Including partial completes in the data removes the risk that excluding dropouts will skew the results. Because purchase intent scores for partial completes tend to be lower than scores for those completing the entire survey, dropouts have the potential to impact the data when they make up a large proportion of a sample. TNS recommends that clients periodically examine the data from respondents who drop out to determine whether the overall findings are impacted. Certain topics or types of surveys may be more prone to dropouts, so understanding the potential impact allows informed decisions about how to handle partial completes.
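The periodic check recommended above can be as simple as recomputing a key measure with and without partial completes and flagging any shift beyond a chosen threshold. The Python sketch below is one hypothetical way to do this; the record layout (a 'purchase_intent' rating on a five-point scale and a 'complete' flag) and the two-point threshold are assumptions for illustration, not part of TNS's methodology.

```python
# Minimal sketch (assumed record layout, not TNS's production code) for checking
# whether partial completes move top-two-box purchase intent by 2+ points.
def top_two_box(ratings, top_codes=(4, 5)):
    """Share of respondents giving a top-two-box rating on a 5-point scale."""
    return sum(r in top_codes for r in ratings) / len(ratings)

def partials_shift_results(respondents, threshold=0.02):
    """Compare full completes only vs. full completes plus partial completes."""
    full = [r["purchase_intent"] for r in respondents if r["complete"]]
    everyone = [r["purchase_intent"] for r in respondents]
    shift = abs(top_two_box(everyone) - top_two_box(full))
    return shift >= threshold, shift

# respondents = load_concept_test_data(...)   # hypothetical data loader
# flag, shift = partials_shift_results(respondents)
```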

Question #4: Does it matter what day of the week the field starts and how long it stays open?

One of the main advantages of using the Internet to collect marketing research data is speed: data can be collected very quickly. However, there is concern that speed may negatively affect data quality. Data and business decisions may be impacted by the day of the week the field starts or by how long the field is open.

TNS fielded a 25-cell test to address questions about the appropriate field period for online studies. The research-on-research study involved five concepts, each with five separate samples, one launched on each weekday (Monday through Friday). Respondents had seven days from the day their sample launched to respond.

Day one and day two of the field have the largest proportion of completes (Figure 1). There are very few completes on days five through seven.

Earlier responders (days one to three of the field) tend to be slightly different demographically: they are older, part of smaller, lower-income households with no children, and more likely to be white and retired. Later responders (days four to seven) tend to be those with busier lives (i.e., employed full time, with children/families). These demographic differences suggest the field should remain open at least four days in order to get a representative sample.

Purchase intent and other key measures for different cumulative field-period totals are consistent with the full seven-day data. Table 4 shows combined data for the five concepts, but differences are not observed even for individual concepts. There are also no meaningful differences in demographics or habits across these four groups. These data, coupled with the completion rates, suggest a four-day field period is sufficient to get stable results.
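One hypothetical way to examine cumulative field-period cuts like those in Table 4 is sketched below in Python: it tallies the share of completes received by the end of each field day and recomputes a key measure using only completes through a given cutoff day. The field names and the top-two-box calculation are assumptions for illustration, not TNS's actual tabulation.

```python
# Minimal sketch (hypothetical field names: 'field_day' of 1-7 and
# 'purchase_intent' on a five-point scale) for comparing cumulative
# field-period cuts against the full seven-day data.
from collections import Counter

def cumulative_share_by_day(records, max_day=7):
    """Share of all completes received by the end of each field day."""
    by_day = Counter(r["field_day"] for r in records)
    total, running, shares = len(records), 0, {}
    for day in range(1, max_day + 1):
        running += by_day.get(day, 0)
        shares[day] = running / total
    return shares

def top_two_box_through(records, cutoff_day, top_codes=(4, 5)):
    """Top-two-box purchase intent using only completes through the cutoff day."""
    subset = [r for r in records if r["field_day"] <= cutoff_day]
    return sum(r["purchase_intent"] in top_codes for r in subset) / len(subset)

# e.g., compare top_two_box_through(records, 4) with top_two_box_through(records, 7)
```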

There are very few significant differences between weekday and weekend responders on key measures, demographics and habits. There are also no meaningful demographic or habit differences based on the day of the week the field launched. Key-measure scores do tend to be slightly lower for Thursday launches and slightly higher for Friday launches, but the differences are generally small and the same pattern is not observed across all five individual concepts. All of this suggests the field can start on any day of the week.

Based on this research-on-research, a four-day field period will yield stable results, leading to more representative samples and ultimately more accurate data and business decisions. The day the field launches does not appear to impact the data, so there is no need to launch on a consistent day of the week. There is also no need to include both weekends and weekdays in the field period, since there are no differences between the two.

Research rigor

It is important as an industry that research design decisions are backed by research rigor. The introduction of new technologies presents an ever-changing dynamic in how surveys are conducted, so there will always be a need to create better, faster and more efficient methods of gathering and analyzing data. Following best practices and continuing to strive for excellence in survey and study design results in higher quality and gives researchers confidence in their business decisions.