Editor’s note: Guy Wates serves as director of operations and programmatic at Measure Protocol, London. 

I’ve spent the last decade of my career as a sample buyer with both GfK and Zappi. But today I find myself again dealing in the lifeblood of the market research industry: respondents.

In truth, so much has changed that the industry seems almost unrecognizable. When I originally worked on the supply side, it was long before the era of efficiency at scale. Since then, pressure to reduce costs and increase speed has caused a hyper-focus on ways to achieve efficiency. But are we looking in the right places to achieve this end?

It is clear that automation in online sampling is a good thing. However, using it as a short-term solution has long-term consequences that must be considered. Here's why:

  • Respondents can become a commodity. In seeking to drive costs down, we’ve created routers and qualification profilers and stopped treating respondents as real people. Our quest for efficiency has caused major inefficiencies for the respondent.
  • Poor respondent experiences compound the problem. Smart market researchers everywhere bemoan terrible user experiences. The issues are many: poor respondent compensation, lengthy questionnaires, late disqualification from surveys, being bounced through routers from one survey to the next and being asked the same question multiple times.

It's not that great to be a respondent. And the good ones – those who want to participate honestly and get paid fairly – are leaving in droves. Budgets are spent subjecting new respondents to the same broken ecosystem instead of funding a better way. We're left with ever-growing pools of respondents who have self-selected to stay engaged in this system and have learned behaviors to get one over on us. We shouldn't wonder why we see quality issues in our industry.

I’m the first to admit that during my decade on the buyer side, I fueled the problem by not truly thinking about the real people behind the CPI. Now on the other side, I can see that we need to act or face a future where we have no one left to share opinions and data with our industry. (The metaphor of the frog in a pan of slowly boiling water comes to mind.)

A different kind of efficiency

We should be seeking a different kind of efficiency that is designed to drive participation in research back up and, therefore, boost quality as well. The solution starts with the user experience. Everyone in the value chain has a part to play, from the panel company to the research buyer. 

There are a few common principles that we should be following to achieve the goal of a better user experience. Some of these include: 

  • Profiling. Use profiling data that’s already collected and validated. Don’t ask people to restate things we already know about them, and don’t insist on additional profiling outside the intended survey. We shouldn’t be asking people extra questions before and after the survey just because we can.
  • Routing. Insist on no routing. Participants shouldn’t have to endure being bounced from one survey to another. We’ve seen examples where someone endured this for a full 12 minutes with no positive outcome.
  • Shorter experiences. Disqualify participants within the first one or two minutes of the survey, not seven minutes in. This includes all reasons: screened out, duplicate, quota full, survey closed, etc. In addition, keep surveys as short as possible and be honest with people about how long the survey will take. Include the entire experience time – any parts of the user journey that may take up time, not just the questionnaire.
  • Fair compensation. Pay people fairly for their time. Insist that at least half of the cost of the sample gets back to participants. 
  • Platform optimization. Truly optimize surveys for mobile. Always test the entire survey experience on Android and iOS devices. Don’t force people to do surveys on PCs or laptops – let them choose.
  • Ask the right kind of questions. Limit the use of open-ended questions and, where possible, give participants the option to skip them; open ends are time-consuming and increase the chances people will be forced to provide poor feedback. Think carefully about your trap questions – they shouldn’t stand out as unusual or detract from the participant experience. Limit the amount of text people need to read; opt for images and video for instructions instead. Use everyday language and a conversational tone. 
  • Transparency. Be transparent about the research, why you’re collecting the data and what it will be used for. Also tell the participant who it’s for – if you can’t do this at the start then reveal it at the end of the study. Be transparent about why someone didn’t qualify for a study, including if they failed quality checks.
  • Privacy first. Build privacy into your design, not just because it’s a regulatory obligation. Give people granular controls over what they share and when they share it. Treat data as belonging to participants, not to clients. 

Build trust with respondents for better quality

Much of the above can be achieved if you own the respondent journey from A to Z. Creating an ecosystem that prioritizes the user experience and fair compensation, and that is built on principles such as data sovereignty, privacy and transparency, leads to a more engaged user base. This results in higher-quality outcomes: great experiences lead participants to trust the researcher, which in turn drives continued participation. In this kind of environment, the researcher can also start to trust the data they collect from participants.

Poor user experience erodes trust, and that is bad for the market research industry. Trust is the new oil for many businesses, and it’s fundamental that we have it in research. We’ve observed better quality on all fronts in an app environment built on these trust principles. Participants experiencing these principles take more time over tasks, pay more attention and are less likely to over-claim in surveys. Our recent research shows how quality can be improved by focusing on the long-term respondent experience. It’s time for a change, on both sides of the marketplace.