
Put data quality at the core

Editor's note: James Snyder is vice president, trust and safety at Cint. He can be reached at james.snyder@cint.com.

In today’s research ecosystem, the stakes for data quality have never been higher. As digital methodologies scale and evolve, so do the risks of online survey fraud that compromise data quality. Bot activity, survey manipulation, biased sampling and increasingly sophisticated forms of fraud have made trust and safety essential, not optional. The platforms, partners and practitioners who succeed in this new environment are those who embed quality not as a one-time check but as a living, breathing principle that guides how teams operate and collaborate.

There’s a tendency to think of fraud as a fringe issue, something that can be addressed with stronger tools or better audits. But in reality, fraud and data integrity failures are systemic. They exploit gaps in communication, process design and incentives. And as research becomes more programmatic, more automated and more dependent on data flowing across multiple layers of vendors and platforms, those gaps multiply.

In this context, traditional quality-control measures are no longer sufficient. Spot-checking results after the fact or relying on historical benchmarks to validate sample integrity can create a false sense of security. The challenge now is to build systems and cultures that assume fraud is inevitable and that are designed to catch it before it causes harm.

Reactive to proactive

One of the biggest shifts I’ve seen in successful trust and safety practices is the move from reactive to proactive. Historically, many research teams treated quality as a final step, a box to check before delivering data to a client. But that approach is fundamentally misaligned with how today’s fraudsters operate. To be effective, trust and safety must be embedded upstream – in how sample is sourced, how platforms are architected and how data flows are monitored in real time.

This requires building cross-functional workflows that enable early detection. For example, operational teams need clear processes for flagging anomalies and escalating concerns. Engineering teams need visibility into how users interact with systems, not just from a product standpoint but from a behavioral and integrity perspective. And client-facing teams need to be empowered to explain quality trade-offs and mitigation strategies transparently.

It also means investing in tooling and data architecture that allow for more granular insight. For example, centralized identity graphs or real-time traffic validation tools can surface patterns that fragmented systems often miss. When tools, vendors and workflows are disconnected, it becomes exponentially harder to detect early signals of fraud. To be effective, data needs to be clean, connected and centralized, not manually reconciled across silos.
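
To make that concrete, here is a deliberately simplified sketch of the kind of check a connected data layer enables – pooling completes from every source and flagging cross-source duplicates and speeders in one pass. It is illustrative only; the field names (respondent_id, device_hash, duration_sec, source) and the two-minute threshold are assumptions, not a description of any particular platform’s tooling.

```python
# Illustrative sketch only: why centralizing respondent signals matters.
# All field names and thresholds here are hypothetical assumptions.
from collections import defaultdict

def flag_suspect_completes(completes, min_duration_sec=120):
    """Return (respondent_id, reason) pairs worth a human review."""
    flags = []
    seen_devices = defaultdict(list)  # device_hash -> respondent_ids

    for row in completes:
        # Speeders: completion times far below a plausible minimum.
        if row["duration_sec"] < min_duration_sec:
            flags.append((row["respondent_id"], "completed implausibly fast"))
        seen_devices[row["device_hash"]].append(row["respondent_id"])

    # Duplicates: one device fingerprint appearing under multiple identities,
    # often across different sample sources - invisible if each vendor's
    # file is reviewed in isolation.
    for device_hash, ids in seen_devices.items():
        if len(ids) > 1:
            for rid in ids:
                flags.append((rid, f"device shared with {len(ids) - 1} other respondent(s)"))

    return flags

# Example with made-up data: r1 and r2 share a device across two sources.
sample = [
    {"respondent_id": "r1", "device_hash": "abc", "duration_sec": 95, "source": "vendor_a"},
    {"respondent_id": "r2", "device_hash": "abc", "duration_sec": 480, "source": "vendor_b"},
    {"respondent_id": "r3", "device_hash": "def", "duration_sec": 610, "source": "vendor_a"},
]
for rid, reason in flag_suspect_completes(sample):
    print(rid, "-", reason)
```

The specific rules matter less than the structure: because the data is pooled, the cross-source duplicate is visible at all. The same check run separately against each vendor’s file would find nothing.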

Buy-in from every part

Operational changes will only take you so far if your organizational culture treats quality as someone else’s job. One common barrier I see is the siloing of trust and safety work, relegating it to compliance, legal or operations. But building a resilient research ecosystem requires buy-in from every group in the organization.

Quality should be part of the way sales talks about value. It should influence how product managers prioritize features. It should be top of mind when customer success teams are navigating tough client conversations. And it should be celebrated when teams catch something early that prevents a downstream issue, not treated as a disruption or delay.

Fostering this mind-set takes more than a mandate. It takes leadership that’s willing to model transparency and accountability. It takes cross-team communication that goes beyond issue escalation and into knowledge sharing. And it takes a willingness to treat quality as a core value, even when it means making hard choices, like rejecting easy revenue or extending timelines to get things right.

A recurring theme in my work has been the realization that data quality can’t be retrofitted. Once bad data enters a system, it’s almost impossible to unwind the consequences. It may lead to flawed strategic decisions, biased product development or wasted media spend. And often, the reputational damage – to researchers, platforms or clients – is far greater than the short-term gains from cutting corners.

That’s why prevention is so much more powerful than correction. Catching a botnet before it skews a tracker or intercepting a fraudulent sample before it reaches a client’s dashboard can save not just time and money but trust – in your platform, your people and the research itself.

Building prevention into workflows means rethinking how performance is measured. Instead of just focusing on volume or speed, teams should also be rewarded for raising quality concerns, flagging anomalies and taking the time to investigate. These behaviors need to be seen as a sign of maturity, not inefficiency.

Consequences are very real

One of the reasons trust and safety work is often undervalued is that its ROI can be difficult to quantify, especially when it’s working. It’s easy to measure efficiency gains or cost reductions but harder to account for the value of crises that never materialize or reputations that remain intact. And yet, the financial consequences of compromised quality are very real. Cleaning bad data often means rerunning surveys, reanalyzing results and reassessing vendor relationships – costly steps that drain time, budget and trust. It can also trigger investments in third-party tools and protective systems to avoid repeat failures.

One of the most infamous examples was Coca-Cola’s New Coke launch in 1985. Based on flawed taste-test data that failed to consider brand loyalty and emotional connection, the company introduced a new formula that triggered public backlash and cost millions to reverse. The lesson: When quality is compromised, the ripple effects extend far beyond the research team.

Low-integrity data doesn’t just affect research accuracy; it erodes confidence in business decisions, strains client relationships and creates downstream operational waste. When marketing teams base campaign strategy on flawed insights, or when product teams launch features based on skewed user feedback, the impact can ripple across quarters. In regulated industries, where bad data can lead to compliance violations or legal exposure, the stakes are even higher.

Understanding these broader consequences can help reframe trust and safety not just as a technical function but as a business-critical one. It’s not just about preventing fraud; it’s about protecting the long-term viability of platforms and the credibility of the insights they produce.

People are at the heart

Too often, when we talk about quality, we focus on tools and frameworks, but people are at the heart of successful trust and safety initiatives. Behind every flag, escalation and early intervention is someone with the right instincts and training to spot anomalies that machines might miss.

That’s why recruiting, training and retaining talent in this space is so important. Teams need people who understand both the technical mechanics of data and the human behaviors behind fraud. And they need to feel empowered, with the authority, incentives and support to take action when something doesn’t look right.

Upskilling also plays a critical role. As fraud tactics evolve, so must the strategies used to detect and prevent them. That means investing not just in technology but in continuous learning: workshops, cross-training and knowledge-sharing forums that help teams stay sharp and adaptive. In a world where trust is constantly under attack, human judgment is still one of our most powerful defenses.

Four practices for strengthening marketplace integrity

If there’s one thing I’ve learned, it’s that no one has all the answers. But there are a few practices I’ve seen make a real difference – regardless of company size, platform model or vertical:

Embed fraud detection into platform design. Don’t rely on external vendors or post-hoc reviews alone. Build fraud resistance into your product architecture – through traffic validation, behavioral monitoring and rule-based triggers. The earlier fraud is caught, the easier it is to act. (A minimal sketch of a rule-based trigger follows this list.)

Create feedback loops between teams. Often, the insights needed to improve quality are already within your organization; they’re just not being shared. Encourage client-facing teams, data scientists and product engineers to regularly sync on what they’re seeing. Cross-pollination surfaces trends faster and helps close blind spots.

Educate and empower clients. Clients are your partners in protecting quality. The more they understand how fraud manifests and what trade-offs exist between speed, cost and integrity, the more constructive and proactive those conversations become. It’s not about fear; it’s about partnership and clarity.

Design thoughtful, respectful surveys. Buyers, those designing surveys, also play a key role in data quality. While not directly tied to fraud, a strong respondent experience leads to better engagement, more thoughtful answers and, ultimately, cleaner, more reliable data.
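
As referenced in the first practice above, here is a minimal sketch of how rule-based triggers might sit in front of survey completes, rejecting or routing sessions for review before questionable data ever reaches a client. The signal names (ip_reputation, is_headless_browser, straightline_score) and the thresholds are hypothetical placeholders rather than any real platform’s API.

```python
# Illustrative sketch only: rule-based triggers evaluated per session.
# Signal names and thresholds are hypothetical assumptions.

RULES = [
    # (human-readable rule name, predicate over a session dict)
    ("known bad IP", lambda s: s.get("ip_reputation") == "blocklisted"),
    ("headless browser", lambda s: s.get("is_headless_browser", False)),
    ("straightlining", lambda s: s.get("straightline_score", 0.0) > 0.9),
]

def evaluate_session(session):
    """Return ('accept' | 'review' | 'reject', list of rules that fired)."""
    fired = [name for name, predicate in RULES if predicate(session)]
    if not fired:
        return "accept", fired
    # Hard signals reject outright; softer ones route to human review.
    if "known bad IP" in fired or "headless browser" in fired:
        return "reject", fired
    return "review", fired

decision, reasons = evaluate_session(
    {"ip_reputation": "clean", "is_headless_browser": False, "straightline_score": 0.95}
)
print(decision, reasons)  # -> review ['straightlining']
```

The value is less in any single rule than in where the check lives: inside the platform, before the complete is accepted, rather than in a cleanup pass after delivery.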

Industry-wide collaboration

While individual organizations can do a lot, lasting change requires a broader commitment across the research ecosystem. Fraudsters don’t respect company boundaries, and neither should our defenses. Sharing tactics, collaborating on standards and advocating for transparency will benefit the entire industry, not just individual players.

That collaboration extends to how we treat competitors as well. I've had some of the most productive conversations about quality with others in the industry, even those from competing organizations. When our shared goal is protecting the legitimacy of the research industry, our differences become less significant. We all benefit from trustworthy data and we all suffer when it's compromised.

As research becomes more automated, more dynamic and more deeply embedded in how decisions are made, the importance of trust and safety will only grow. We can’t afford to think of it as a support function or a cost center. It’s a strategic imperative, one that touches every part of the value chain.

The organizations that thrive in the years ahead will be those that understand this and act accordingly. They’ll treat quality as a competitive advantage, not just a requirement. They’ll build systems that are resilient by design. And they’ll foster cultures where doing things the right way is the default, not the exception.

Trust and safety isn’t a checkbox. It’s a commitment. One we make every day, in how we build, how we respond and how we lead.