The path to trustworthy insights in the age of AI
Editor’s note: Bob Fawson is founder and CEO of Data Quality Co-Op. He has held executive strategy, operational and product roles at Numerator, Dynata, SSI and Opinionology. As board vice chairman at SampleCon and a strategic advisor to innovative market research companies, Fawson enjoys contributing to the next wave of insights innovation.
A study released earlier this year highlights a disconnect that should concern anyone who relies on data to drive decisions. While 71% of organizations with formal data strategies and governance report high trust in their data, only about half of those without such structures can say the same. The gap reflects a deeper truth that goes beyond technology or tools: Confidence in data is a product of systemic rigor.
In an era where generative AI, automated analytics and real-time dashboards are transforming how decisions are made, that confidence is becoming as important as the insights themselves. Across industries, leaders are recognizing that long-standing data quality issues now carry greater consequences. As automation accelerates, unreliable data doesn’t just skew analysis; it delays decisions and amplifies downstream risk.
Data confidence: The era of snapshot quality is ending
For years, much of the research world treated quality as a series of isolated checks.
Attention screens, speed flags, trap questions and device fraud signals all served as useful tests of whether a respondent looked legitimate when they entered a survey. They were effective in a world where data providers had direct, ongoing relationships with the consumers they surveyed.
But the data environment has changed. Participation is distributed across suppliers, panels and sources, and at the same time AI makes it easier to falsify good-looking data. And the most consequential quality issues are rarely obvious outliers. Small, repeat inconsistencies within data that otherwise appear acceptable skew results and lead to suboptimal decisions.
This mirrors broader trends in data governance. Recent industry research indicates that 67% of organizations say they don’t completely trust the data they use for business decisions, even though quality issues are consistently flagged as a top challenge.
Spot checks still matter, but they capture only part of the picture. Confidence in the data depends on what happens across studies, not just within one survey.
Systems thinking and trust signals
What does this mean for research teams? In practical terms, it suggests a shift from a checklist mindset toward a systems mindset – treating data quality not as a series of isolated events, but as a continuous pattern.
In other domains, similar transitions are already underway. Business intelligence teams build governance frameworks that track lineage, consistency and completeness over time. Analytics leaders invest in observability to catch issues before downstream consumers raise alarms. AI governance becomes a board-level priority precisely because once trust erodes, it’s costly to rebuild.
Within the research community, this pattern has already emerged. The conversation is moving from “Did the respondent pass these checks?” to “How do we characterize confidence in this data?” The distinction is between evaluating a single response and understanding participation patterns over time.
This shift is reflected in the introduction of scoring models similar to a credit score in the financial industry. Built on observed behavior across the data ecosystem and across time, this kind of approach brings technical fraud indicators, in-survey behavior and participation history together into a single metric that communicates respondent trustworthiness. Rather than replacing existing quality checks, it can provide a common framework for interpreting them with historical context.
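To make the idea concrete, here is a minimal sketch of what such a composite trust score could look like. Everything here is illustrative: the signal names, the 0–100 scale and the weights are assumptions, not a description of any vendor’s actual model, and a production system would calibrate its weights against observed outcomes rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class RespondentSignals:
    """Illustrative inputs only; real systems use richer, vendor-specific signals."""
    fraud_risk: float      # 0.0 (clean) to 1.0 (likely fraudulent), from technical checks
    behavior_score: float  # 0.0 to 1.0, from in-survey behavior (speeding, traps, straightlining)
    history_score: float   # 0.0 to 1.0, from participation history across studies

def trust_score(s: RespondentSignals, weights=(0.4, 0.3, 0.3)) -> float:
    """Blend the three signal families into a single 0-100 metric, credit-score style.

    The weights are placeholders chosen for the example.
    """
    w_fraud, w_behavior, w_history = weights
    composite = (
        w_fraud * (1.0 - s.fraud_risk)   # invert risk so higher always means better
        + w_behavior * s.behavior_score
        + w_history * s.history_score
    )
    return round(100 * composite, 1)

# A respondent with low fraud risk, solid in-survey behavior and a
# consistent cross-study history lands near the top of the scale.
print(trust_score(RespondentSignals(fraud_risk=0.05, behavior_score=0.9, history_score=0.85)))
```

The point of a single metric is not to replace the underlying checks but to give buyers and suppliers a shared, historically grounded way to read them.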
Why now, not later
There are several forces converging to make this shift more than just theoretical:
- AI is being embedded into decision workflows faster than most organizations are putting the governance in place to ensure the data feeding those systems is reliable.
- Budget pressure has increased scrutiny on research outputs, leaving less room for rework or late-stage quality issues.
- People using insights across marketing, operations and leadership roles increasingly want to understand not just the findings, but how confident they should be in them.
Quality assessments: What this looks like in practice
This shift focuses on how quality is evaluated and when that evaluation takes place.
Instead of treating each survey as a standalone event, participation history is considered alongside technical and in-survey signals to provide a full view of respondent reliability. The emphasis is on adding context to quality decisions, rather than relying solely on what can be observed within a single study.
For example, the approach I use does this by:
- Retaining participation history so routing and inclusion decisions reflect how respondents have behaved across studies.
- Examining aggregated behavior patterns to identify consistent quality issues that do not surface within individual projects.
- Communicating quality as a measure of trust grounded in observed behavior, rather than as a pass or fail judgment applied after fieldwork.
This keeps quality assessment closer to the source of data collection and reduces reliance on post-survey cleanup.
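The routing idea above can be sketched in a few lines. This is a hypothetical illustration, not the author’s actual system: the in-memory store, the per-study quality score, the 0.6 threshold and the three-study minimum are all assumptions chosen to show the principle that inclusion decisions draw on cross-study history rather than any single survey.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical store of per-study quality scores (0.0-1.0) per respondent;
# a real system would persist this across suppliers and studies.
history: dict[str, list[float]] = defaultdict(list)

def record_participation(respondent_id: str, quality_score: float) -> None:
    """Retain how the respondent behaved in one study."""
    history[respondent_id].append(quality_score)

def include_in_routing(respondent_id: str,
                       threshold: float = 0.6,
                       min_studies: int = 3) -> bool:
    """Decide inclusion from aggregated behavior, not a single survey.

    Respondents with too little history are included by default; those
    whose average across studies stays low are routed out.
    """
    scores = history[respondent_id]
    if len(scores) < min_studies:
        return True
    return mean(scores) >= threshold

record_participation("r-101", 0.9)
record_participation("r-101", 0.85)
record_participation("r-101", 0.4)   # one weak study does not disqualify
record_participation("r-202", 0.5)
record_participation("r-202", 0.45)
record_participation("r-202", 0.5)

print(include_in_routing("r-101"))   # consistent history keeps r-101 in
print(include_in_routing("r-202"))   # consistently low average routes r-202 out
```

Note how the pattern matters more than any single data point: r-101 survives one poor study because the surrounding history is strong, which is exactly the kind of issue a single-survey check cannot see.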
A more honest approach to quality
The research industry has always taken data quality seriously. What is changing is how quality is understood and where it is evaluated.
As participation becomes more distributed and decision-making becomes more automated, quality can no longer be assessed only at the level of individual surveys. The signals that matter most are increasingly cumulative, shaped by how respondents behave across studies rather than within any single interaction. When quality is grounded in observed behavior over time, confidence becomes easier to establish and explain.
Trust, in this sense, is not an abstract ideal. It is the condition that allows data to be used with confidence, decisions to move forward without hesitation and insights teams to stand behind the work they deliver.