Measuring trust in consumer AI use
Editor’s note: Wendy Smith is a senior manager of research science at SurveyMonkey.
AI adoption is climbing. That doesn’t mean trust is.
One in three Americans now uses AI daily or weekly, according to the latest SurveyMonkey AI Sentiment Study. At the same time, skepticism is intensifying. Yet many organizations still treat usage as a proxy for approval.
That assumption misses what may be the most important signal in today’s AI landscape: ambivalence.
Ambivalence is a measurable signal
Too often, AI sentiment gets framed as a race between adoption and trust, as if one will inevitably follow the other. Our data suggests something else: Americans’ views are becoming more nuanced as they brace for a long-term shift.
According to SurveyMonkey research, nearly all Americans (98%) expect AI to impact the world, and nearly half (46%) cite it as one of the top societal issues for 2030, second only to economic stability (51%). For Gen Z, the impact of AI is the No. 1 concern for the future. Yet this isn’t a simple story of pessimism. Three in five Americans (60%) believe the impact will be both positive and negative, up from 35% a year ago.
This mixed-impact mindset matters because it changes how people behave. They use AI, but they double-check the outputs. They value speed but still expect human oversight. They opt in, with conditions.
For researchers, this means binary questions like “Do you trust AI?” are increasingly insufficient. The real insight lives in the gray area: When does AI feel helpful? When does it cross a line? And what breaks trust fastest?
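One way to put a number on that gray area, borrowed from attitude research rather than from the study itself, is the Griffin ambivalence index (Thompson, Zanna & Griffin, 1995): ask respondents to rate AI’s positive and negative aspects on separate scales, then score how strongly the two coexist. A minimal sketch in Python, assuming illustrative 0–10 ratings:

```python
# Minimal sketch: scoring ambivalence from paired survey items.
# Assumes each respondent rates AI's positive and negative aspects
# separately on a 0-10 scale (illustrative, not SurveyMonkey's instrument).
# Griffin index: ambivalence = (P + N) / 2 - |P - N|
# High scores mean strong positive AND negative feelings held together.

def griffin_ambivalence(positive: float, negative: float) -> float:
    """Return the Griffin ambivalence score for one respondent."""
    return (positive + negative) / 2 - abs(positive - negative)

respondents = [
    {"id": 1, "positive": 9, "negative": 8},  # conflicted power user
    {"id": 2, "positive": 9, "negative": 1},  # enthusiast
    {"id": 3, "positive": 1, "negative": 2},  # indifferent
]

for r in respondents:
    score = griffin_ambivalence(r["positive"], r["negative"])
    print(f"Respondent {r['id']}: ambivalence = {score:.1f}")
```

A binary trust question could lump the first two respondents together; the index separates genuine ambivalence from simple enthusiasm or indifference.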
When AI credibility fractures, consumers reallocate their trust
The AI Sentiment Study shows that the fastest trust breaker for consumers is non-consensual data use. Four in 10 Americans (38%) say an AI assistant storing or sharing personal data without consent would cause them to immediately lose trust in the company behind it. That outranks every other concern we tested.
Other trust breakers are also tied to control and transparency:
- 23% cite the inability to transfer to a human agent.
- 14% point to a lack of transparency about whether they’re interacting with AI.
- 11% cite generic or scripted responses.
Once these experience failures fracture trust, users who don’t abandon AI altogether are quick to switch tools, as evidenced by the recent mass exodus from ChatGPT to Claude. Following ongoing privacy controversies surrounding ChatGPT, and Anthropic’s simultaneous refusal to allow the Department of Defense to use its Claude models for domestic surveillance or autonomous weapons, many ChatGPT users cut ties with the tool and took their digital memory with them.
When credibility fractures, users don’t hesitate. They reallocate their trust.
High-stakes decisions expose where trust really stops
To see where Americans draw the hardest lines around AI, look at hiring.
Our research shows overwhelming resistance to AI operating without humans in recruitment decisions. Nearly nine in 10 Americans (87%) want human involvement in the job application process. Half (48%) would trust a human supported by some AI processes, while 39% would trust only a human, with no AI influence at all.
Only 9% place most of their trust in AI.
Comfort levels drop sharply as AI’s role becomes more consequential:
- 31% are comfortable with AI identifying potential candidates.
- 26% are comfortable with AI screening resumes.
- 6% are comfortable with AI conducting interviews.
- 5% are comfortable with AI making final hiring decisions.
At the same time, there’s a striking asymmetry: Only 25% of Americans trust AI to evaluate candidates fairly, yet 61% would use AI themselves to help get a job, most commonly to optimize a resume.
This gap tells us something important. People are far more willing to use AI as a tool than to be judged by it. Control matters. Accountability matters. When the stakes are personal, hands-on human involvement matters most.
What consumer AI trends reveal for market researchers
AI adoption metrics alone no longer tell the full story. Satisfaction scores can hold steady while skepticism grows, and usage can climb even as confidence erodes.
This pattern extends beyond AI-specific tools. According to the SurveyMonkey Trends 2026 report, Americans increasingly expect transparency and human accountability as technology plays a larger role in decision-making, and confidence drops when those expectations aren’t met. In other words, speed alone doesn’t build trust. Context does.
For researchers, this creates both a responsibility and an opportunity.
- Measure ambivalence explicitly. Continuous research is especially critical here. One-off snapshots miss how sentiment evolves as people gain exposure, experience failures or encounter trust breakers firsthand (see the tracking sketch after this list).
- Study where and why AI is acceptable. Acceptance is highly contextual. AI can feel fine in low-risk moments and deeply uncomfortable in high-stakes ones. Understanding those differences helps organizations design AI experiences that build trust.
- Research transparency itself. People want disclosure, but how it’s delivered matters. When transparency feels performative, it’s not reassuring. When it’s clear, specific and paired with human accountability, it can stabilize trust even amid rapid change.
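On the continuous-measurement point, tracking can be as simple as re-fielding the same items each wave and flagging meaningful movement in the ambivalent share. An illustrative sketch, anchored to the two real data points above (35% a year ago, roughly 60% now), with a hypothetical interim wave and an arbitrary alert threshold:

```python
# Illustrative tracker: flag wave-over-wave shifts in the share of
# respondents who expect AI's impact to be "both positive and negative".
# The first and last values echo the article (35% a year ago, ~60% now);
# the middle reading is a hypothetical placeholder.

waves = {
    "year-ago wave": 0.35,
    "mid-year wave": 0.48,  # hypothetical interim reading
    "latest wave": 0.60,
}

ALERT_THRESHOLD = 0.05  # flag moves of 5+ points between waves

labels = list(waves)
for prev, curr in zip(labels, labels[1:]):
    delta = waves[curr] - waves[prev]
    flag = "  <- shift worth segmenting" if abs(delta) >= ALERT_THRESHOLD else ""
    print(f"{prev} -> {curr}: {delta:+.0%}{flag}")
```

The threshold here is arbitrary; the point is that wave-over-wave deltas, not single readings, are what surface a trust breaker early.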
Consumer ambivalence is a signal worth listening to
The most dangerous assumption organizations can make is that AI familiarity will eventually turn into trust. Our data suggests the opposite: as AI becomes more common, people are paying closer attention.
For researchers, the path forward is about surfacing sentiment clearly and early, so trust breakers become visible before they turn into brand liabilities.
AI will keep moving fast. The question is whether we’re measuring the right signals to understand how people actually feel about it.