Let's meet the respondents more than halfway

Editor’s note: Laura Davies is vice president of panel strategy at Vision Critical, a Vancouver research firm.

As research clients' concern has grown over the quality and consistency of data drawn from surveys conducted via online panels, the industry has worked together to find ways to improve data quality - from developing an ISO standard for online panels to creating software solutions that identify and screen out individual respondents on the basis of past behavior. These efforts are typically framed around the spectre of the professional respondent: the too few who take too many surveys, and do so inattentively or misleadingly, perhaps because their motivations are primarily or solely pecuniary. The solutions proposed therefore generally focus on how to spot these behaviors - including through deliberate traps set for the purpose - in order to eliminate the data from these respondents or even preclude them from future participation.

These are vital steps to take in the short term to improve the quality of the data our clients receive. However, these techniques treat the symptoms and do not examine what it is about the nature of online research that has led to the rise in these behaviors among those who participate. In the process, the industry has tended to demonize those upon whom we rely - the willing research participants - while failing to recognize the responsibility that we, as researchers and panel owners, hold in creating the conditions that promote problem behaviors. Furthermore, these approaches ignore the changing Internet landscape in which online panels exist. This article seeks to redress some of that balance and to point the way toward options we may have to genuinely improve online research - both now and for the future.

Range of motivations

It is well known that there is a range of motivations we try to appeal to when persuading people to take part in research, whether ad hoc or on an ongoing basis as a panel member. Broadly speaking, these form a continuum: from reasons intrinsic to the research itself, such as simple curiosity, pleasure in expressing one's opinion, contributing to a product's design, influencing a decision or seeing one's views reported in the press; through to motivations extrinsic to the research, such as receiving a reward, a financial payoff or even some form of advantage over non-participants (e.g., access to final survey results, or the ability to use the survey process to communicate opinions and product/service needs to companies and organizations). We try to create a balance so that our panels attract, and our studies include, different people who may be motivated in these different ways. What this amounts to is a package or offering that we promise to participants in exchange for their time, energy and ideas.

As in any ongoing relationship, both parties have a set of expectations regarding their end of the bargain: if either party defaults on the bargain, then the other may be inclined also to do so, in a game of tit for tat. Every experience a member has with a panel influences their likelihood to continue participating as well as the manner in which they do so. Panelists who are not satisfied with their overall experience of belonging to the panel are more inclined to exhibit the kinds of survey satisficing behavior (originally described by Jon Krosnick) that lead to issues in data quality. They are also more inclined to cease participating altogether.

Most genuine research panel owners do their best to deliver what they offer. Much time is spent on providing interactive content to appeal to those who wish to connect beyond the commissioned surveys. Feedback on the use and application of research is distributed via newsletters, and rewards promised are fulfilled (even if they take time to accumulate). Generally speaking, panel owners try to address panel members in a friendly and appreciative tone. However, these elements are only a part of the package that makes up the member's experience and are not in themselves sufficient to maintain overall satisfaction.

Core activity

In terms of the relationship between the panel and the member, by far the largest portion of interaction comes through the studies and research activities we invite members to complete. Therefore, irrespective of what is done to create a positive membership experience at the periphery, it is this core activity that defines the nature of the relationship and members' perceptions of the offering. This presents a particular challenge to access panels, for which exerting control over research mix, quality and content is more difficult.

At our firm, a panelist satisfaction study was devised and fielded on the Angus Reid Forum, our large Canada-wide panel, to better understand the factors that contribute to a satisfying experience as a member. Almost 18,000 members rated their satisfaction with a number of aspects of their membership and participation in the Angus Reid Forum, as well as their overall satisfaction and likelihood to recommend panel membership to someone else. Shapley regression analysis was used to determine which factors were the key drivers of overall satisfaction - that is, which aspects, if improved upon, would have the biggest impact on the overall satisfaction of panel members, and, conversely, which would matter least.
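For readers unfamiliar with the technique, the sketch below shows roughly how a Shapley-style key driver analysis can be computed. It is an illustrative Python outline, not our production implementation, and it assumes a pandas DataFrame of panelist ratings with one column per factor plus an "overall" satisfaction column (all column names are placeholders):

```python
# Illustrative sketch only: Shapley-value decomposition of R-squared across drivers.
# Assumes `df` holds one row per panelist, with the nine driver ratings plus "overall".
from itertools import combinations
from math import factorial

import pandas as pd
from sklearn.linear_model import LinearRegression

def r_squared(df, drivers, outcome="overall"):
    """R-squared of an OLS fit of `outcome` on the given subset of drivers."""
    if not drivers:
        return 0.0
    X, y = df[list(drivers)], df[outcome]
    return LinearRegression().fit(X, y).score(X, y)

def shapley_importance(df, drivers, outcome="overall"):
    """Average marginal contribution of each driver to R-squared over all subsets."""
    n = len(drivers)
    # With nine drivers this is only 2^9 = 512 regressions, so brute force is fine.
    cache = {s: r_squared(df, s, outcome)
             for k in range(n + 1) for s in combinations(drivers, k)}
    scores = {}
    for d in drivers:
        others = [x for x in drivers if x != d]
        total = 0.0
        for k in range(len(others) + 1):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for subset in combinations(others, k):
                with_d = tuple(x for x in drivers if x in subset or x == d)
                total += weight * (cache[with_d] - cache[subset])
        scores[d] = total
    return pd.Series(scores).sort_values(ascending=False)

# Hypothetical usage - the names mirror the nine factors listed below:
# drivers = ["topics", "frequency", "quality", "length", "time_to_respond",
#            "incentives", "newsletters", "look_and_feel", "input_valued"]
# print(shapley_importance(df, drivers))
```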

Nine factors were measured for satisfaction:

1. Survey topics

2. Frequency of surveys

3. Quality of surveys

4. Length of surveys

5. Amount of time given to respond

6. Incentives offered

7. Newsletters/communications received

8. Look and feel of surveys

9. Feeling that the input provided is valued

Of these, the two with the most impact on overall levels of satisfaction were “quality of the surveys” and “survey topics,” in that order. This highlights the centrality of the research itself in defining the panel member’s experience. In addition, the relatively low impact of incentives - and of newsletters and communications - on overall satisfaction indicates that these would not adequately compensate if study quality or study topics were rated poorly. (Happily for us, they were not!)

Needs and objectives

When we help a client develop their own custom panel, one of the essential components of the panel-planning process is to understand how the client’s research needs and objectives map onto a research program for the panel’s annual life cycle. This program takes into consideration both the pieces of research required to gather the data the client needs for decision-making and the activities required for panel engagement. In planning out the full panel program in this way, research activities are not looked at as isolated studies but in terms of how they contribute to the overall experience of belonging to the panel.

Although the task is more complex, we take a similar approach in running restricted-access market panels. These wide-market panels support our research division across multiple verticals but are not sold directly as sample. Care is taken to consider the overall research mix that is presented to panel members: consumer research is interspersed with opportunities for members to respond to current topical issues, and a range of lighthearted questions are developed to sprinkle into the longer, more factual studies. If survey topics are one of the most important factors in driving panelist satisfaction, then some measure of control and planning is essential to ensure that members’ expectations are understood and met.

Typical frustrations

Most panel owners will be familiar with the typical frustrations that panel members report with surveys. These include disqualifications; questions that don’t allow the respondent to express their view; repetitive or tedious questioning; questions that demand too much cognitive effort or feats of memory; poor translations; unclear, stilted language; and overly lengthy studies.

Of these issues, some are more clearly matters of poor research practice; others seem more endemic to the research process. One of the prime examples of the latter is where we terminate an interview because the invited panel member does not meet the selection criteria or because the quotas for those with matching characteristics have been filled. While our broader message to members and potential members typically communicates the importance of their views as individuals, our screen-outs typically send quite the opposite message: “Actually, as it turns out, your views aren’t that important to us after all. You people who are male and over 50 are all the same anyway.”

Faced with this slightly insulting message over and over, while at the same time being told that their individual views should count, it is not especially surprising that members soon become inclined to find ways to avoid disqualification. This is compounded by the lure of a reward which they are also being denied by disqualification, despite showing their willingness to take part. Recognizing the questions that will determine eligibility and selecting the required answers is of course cheating - but is a fairly understandable reaction to the mixed messages and disappointment of exclusion.

It is a combination of market pressure and research requirements that leads to the need to impose quotas or disqualify, and it is not easily eliminated - though it is eminently reducible. It is also a problem that panel owners themselves can address to some degree. Better profiling and advance screening mechanisms allow more and more profile variables to be known before the invitation goes out. At our firm, the wide-market panels operate monthly panel-wide screening surveys, which provide the opportunity for more tactical, project-specific pre-screening. On some panels, such as those operated by Opinium in the U.K., members are rewarded for their participation regardless of whether or not they qualify. Other panels seamlessly provide an alternative set of questions to the respondent.
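As a simple illustration of the advance-screening idea (the profile fields, quota cells and figures below are hypothetical, not taken from any of the panels mentioned), a sketch like the following would invite only members whose stored profile already matches the study's criteria and whose quota cell is still open, rather than screening them out mid-survey:

```python
# Hypothetical sketch of profile-based pre-screening before invitations go out.
from dataclasses import dataclass

@dataclass
class Member:
    member_id: int
    gender: str      # profile variables captured in advance,
    region: str      # e.g., via panel-wide screening surveys

def eligible_invites(members, criteria, quotas, filled):
    """Return members matching `criteria` whose quota cell still has room."""
    invites = []
    for m in members:
        if not all(getattr(m, field) in allowed for field, allowed in criteria.items()):
            continue  # would have been a mid-survey screen-out; skip the invite instead
        cell = (m.gender, m.region)  # illustrative quota-cell definition
        if filled.get(cell, 0) >= quotas.get(cell, 0):
            continue  # quota already full for this cell
        invites.append(m)
    return invites

# Example: two regions with simple gender-by-region quotas (all figures invented).
criteria = {"gender": {"male", "female"}, "region": {"West", "East"}}
quotas = {(g, r): 50 for g in ("male", "female") for r in ("West", "East")}
members = [Member(1, "male", "West"), Member(2, "female", "North"), Member(3, "male", "East")]
print(eligible_invites(members, criteria, quotas, filled={}))
```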

In the hands of researchers

Many of the other aspects that constitute the overall quality of a survey from a panelist’s perspective lie far more in the hands of researchers than panel owners. For example, numerous experiments have shown that beyond a particular length of survey, data quality will diminish as the respondent’s attention (and no doubt also enjoyment) decreases. Putting to one side the quality of data within the particular study, if repeat experience educates the panel members to expect long studies, they will be less inclined to relish the prospect of the next study. They may well choose not to take it, or act in a satisficing manner from the start, providing suboptimal responses in order to proceed through the survey faster.

Regardless, many online panels, faced with a client request for a 30-minute study to test 20 slightly varied iterations of a product package using rating-scale grids of 30 items each, will still be willing to field the study in the knowledge that if they don’t, one of the other players in the market will. Damage to their own asset is not well accounted for because it stems not from any one study but from the accumulation of negative experiences, which poisons the reciprocal deal between panel owner and panel member. The clients themselves do not consider these longer-term effects because their relationship with the particular respondents is one-off. But it is the client who may ultimately suffer the consequences. If willing research participants are increasingly subjected to the kinds of studies that encourage reciprocal satisficing and inattentive behaviors, there will be an overall increase in studies containing bad data - because the pool of willing participants is, in fact, quite limited.

Collective effort

So what can be done to improve survey quality from a panel member’s perspective? A collective effort is required within the industry to educate researchers and sample buyers on why survey quality matters and what it consists of, particularly when it comes to online panel-based research. As well as setting standards for what we expect of our respondents, we need standards for what our respondents should expect of us. Panel owners should work together to create and adopt a set of rules of engagement defining what is and is not acceptable when fielding an online study.

Some examples of these rules could be:

1) Unless specific techniques are employed to maintain engagement levels throughout, studies over a certain length should:

a. Be split and run as two studies (after all, one of the great benefits of panels is the ability to go back to the same sample with follow-up questions!).

b. Be designed using split-sampling approaches with data modeling, so that each respondent takes a different, shorter path through the study (see the sketch after this list).

2) Studies should not be finalized until the panel owners have had the opportunity to review them and to make suggestions to ensure that the tone and language are encouraging and appropriate.

3) Pre-testing should be used to check for survey design issues, and studies should be modified in response to panel feedback or a review of where respondents have dropped out.

4) All studies should include the opportunity for panelists to rate them according to quality, length and interest. This feedback should be passed on to the client and those clients whose studies are consistently rated poorly by respondents should be given additional guidance on how to make their studies panel-friendly (or even charged a premium!).

5) Panel owners should provide some guidelines for designing engaging studies, including perhaps examples of types of questions which can be employed within studies to add interest and to stimulate the respondent to continue, and recommended alternatives to some of the traditional ways to ask questions that are particularly fatiguing to panel members. (We recently produced a podcast covering some of these simple, practical tactics that can help to mitigate the effects of respondent fatigue through survey design. It can be downloaded at www.visioncritical.com/podcast/.)
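Rule 1b above refers to split-sampling. Purely as an illustration of that idea (the module names, counts and imputation note are assumptions, not a description of any particular platform), each respondent might be assigned the core questions plus a random subset of the remaining modules, with the resulting gaps handled later through data modeling:

```python
# Illustrative split-sample ("planned missingness") module assignment.
import random

CORE = ["screener", "key_measures"]                    # asked of everyone
OPTIONAL = ["module_A", "module_B", "module_C", "module_D"]
MODULES_PER_RESPONDENT = 2                             # each respondent sees only 2 of 4

def assign_modules(respondent_id, seed="study-123"):
    """Deterministically assign each respondent the core plus a random module subset."""
    rng = random.Random(f"{seed}-{respondent_id}")     # reproducible per respondent
    return CORE + rng.sample(OPTIONAL, MODULES_PER_RESPONDENT)

# Every respondent takes a different, shorter path, yet each optional module is still
# answered by roughly half the sample; the missing cells can then be filled with
# data modeling (e.g., multiple imputation across the split forms).
for rid in range(1, 6):
    print(rid, assign_modules(rid))
```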

Such a set of rules of engagement might receive pushback from many clients, but the hope would be that the smart ones will realize that panels adhering to them will be less subject to some of the problem behaviors that lead to data quality concerns.

Problem behaviors

Peeling back one layer of the onion helps us to understand how we as researchers and panel owners may in fact be responsible for creating some of the conditions that encourage problem behaviors in respondents and, therefore, poor-quality data. But even if we get better at meeting our end of the deal as it currently stands, will this be enough to solve some of the issues surrounding online data collection - in particular, the too few taking part in too many? As already alluded to, willing participants in online research are not an infinite resource: any panel member turned from a genuine and thoughtful contributor into a satisficer through repeated negative survey experiences is not easily replaced. Through one panel’s actions in disengaging a member, the rest of the industry may have lost a willing research participant. More generally, while Internet penetration may continue to increase in many markets, this does not necessarily equate to an increase in the pool of potential research panel recruits.

Given the finite amounts of free time and energy that individuals have to spend, the offering we make to research participants is also competing for their attention with other interesting activities and gainful opportunities. It is not clear that taking part in online research panels remains competitive in terms of the experience it offers Internet users; a key signal of this is the growing difficulty of recruiting and engaging younger, more Internet-savvy cohorts to participate in research panels.

In the main, Internet research remains a top-down experience: researchers pose set questions and respondents pick from a limited range of options to submit their answer. Also in the main, surveys remain linear and text-based. This is now at stark odds with Web 2.0 activities which revolve around user-generated content, individual expression and personalization and with the rich graphical interfaces, movies, music and interactive experiences that can now be found in the online universe.

Appeal that will engage

If we want participation in research projects online to have the kind of appeal that will engage those exposed to these types of online activities, we need to start using the same kind of Web technologies and approaches in conducting our research.

A division within our firm specializes in creating rich-media applications for research, such as more visually engaging versions of traditional research question types. This has filtered throughout our technology, so that clients of our panel and research platform Sparq now have self-service access to a variety of “visual questions” to deploy in their studies. Through a number of parallel test studies, the use of these types of questions has been demonstrated to improve respondents’ experience of participating in a study, to make them more likely to want to participate in a future study and, in fact, to make them perceive that the time taken to complete the study was shorter (even when they actually took more time than those taking a “flat” version). While more experimentation is needed to understand how collecting data in a more visually engaging way might differ from flat, text-based questions, unless we start to meet minimum expectations with respect to the visual and user-driven appeal that Web-based technology makes possible, online research participation just won’t compete for attention.

Harder to argue

A final note: one of the advantages so frequently cited for Internet research over traditional offline methods is convenience: participants can complete surveys at a time of their choosing, in a place of their choosing, without the presence of an interviewer - requiring much less intrusion than, say, a telephone interviewer calling at dinner time. With the squeezing of fieldwork times and the imposition of hard quotas and minimum participation requirements before a member gets purged, it becomes harder to argue that the commitment we ask of online participants is particularly minimal. But putting this to one side, the nature of Web browsing is also changing - the rise of Web-enabled phones means that browsing is an activity that can and increasingly will be done in those odd moments of free time, when on the move or waiting in line - leaving the more deliberate time spent at a PC for the more purposeful activities we now conduct online, whether banking, e-mail or sharing photos.

The next successful iteration of online research will be the one that finds a way to re-widen the appeal of taking part in research in two key ways: firstly, in being able to genuinely claim to provide an engaging experience, by utilizing the technology the Internet can offer to take research well beyond CATI or pen-and-paper-style surveys; and secondly, in finding ways to reach and research participants more fluidly, in context and at times of their choice and convenience. Instead of trying to force our research participants to conform to our requirements, we need to ask what the participant requires of us to make the exchange of their time, energy and ideas a fair and positive one.