AI strategy: The next step for Western markets
Editor’s note: Ellie Tehrani is CEO of Kadence Americas at Kadence International, San Francisco.
Low trust in AI is limiting the West’s ability to compete.
In markets like China and India, artificial intelligence is embedded into how people learn, work and imagine progress. In the United States and the United Kingdom, it’s still viewed with suspicion, framed around job loss, privacy concerns and loss of control; the narrative is shaped more by anxiety than by experience.
New findings from “AI’s great divide: East vs. West,” a global study by Kadence International, reveal how far behind the West is falling, not in infrastructure or innovation but in belief. Just 21% of U.K. and 29% of U.S. respondents say they have expert knowledge of AI. In China and India, those figures leap to 92% and 90%, respectively. The metaphors people use reveal deeper emotional cues: in the East, AI is a “hero” or a versatile “Swiss Army knife.” In the West, it’s more often seen as a “sidekick.” In China, many also describe AI as a “wild card,” reflecting both excitement and unpredictability.

This isn’t only about adoption. It’s a perception gap that research and technology must work together to close.
How East and West see AI
How people describe AI signals whether they will embrace the technology or resist it.
In the study, respondents across five markets were asked to select a metaphor that captured how they perceive AI’s role in their industry. In India, nearly three-quarters chose “hero.” Respondents in the United States and United Kingdom gravitated toward “sidekick,” indicating a view of AI as secondary – useful but not transformative.
These metaphors map directly to behavior. Those who see AI as a “hero” are more likely to report expert-level knowledge, workplace integration and optimism. The “sidekick” framing, common in the West, coincides with limited usage and persistent concerns about control and consequences.
Perception drives behavior. When AI is seen as empowering, users engage more deeply and creatively. When it’s seen as a helper or a hazard, its utility shrinks.
What’s really holding AI back in the West

AI uptake in the U.S. and U.K. is broad but shallow. Many workers interact with AI tools but lack confidence in using them. As the knowledge figures above show, this gap shapes regional trust and usage patterns.
The issue stems less from talent and more from structure. In India and China, AI literacy is part of national strategy, with students learning machine learning basics, data ethics and real-world applications early, making fluency systemic, not supplementary.
By contrast, Western education is a patchwork, leaving many with tools they don’t fully understand or trust. The lack of structured onboarding in workplaces compounds the problem. Sixty-five percent of global respondents said they are eager to deepen their AI knowledge, and more than half believe practical guides or training resources would help. In the U.S. and U.K., however, those resources are often missing or not tailored to job functions.

Without clear education pipelines and role-specific support, skepticism hardens and meaningful engagement stalls.
How companies can reframe AI risk as opportunity
In the West, risk sensitivity shapes how AI is understood more than technical capability does. Concerns about data privacy, bias and job automation consistently overshadow benefits. In the study, worries about data misuse were nearly twice as common as fears of job loss. This mindset shapes both deployment and discourse, creating a feedback loop that reinforces anxiety.
News cycles amplify this narrative, rarely highlighting everyday benefits. According to Pew Research Center, most Americans feel both excited and concerned about AI, but only 24% trust companies to use the technology responsibly.
In high-trust markets, safety is assumed. In low-trust ones, safety must be proven. For Western audiences, AI must show up as transparent, predictable and user-directed. Branding it as intelligent or efficient doesn’t land. What matters is control.
For brands and researchers, the priority is making AI practical and relatable. Not all skepticism can be solved with messaging, but better design, clear accountability and evidence of real utility can soften resistance.
Why researchers must lead the AI conversation

The challenge in Western markets isn’t exposure to AI; it’s the absence of a framework for making sense of it. Even frequent users often can’t articulate how the technology works, what its limitations are or how to evaluate its output. This understanding gap is where researchers hold influence.
Qualitative teams can uncover where resistance lives and what messaging resonates. Segmentation research can pinpoint which groups view AI as empowering versus intrusive. Behavioral data can trace how risk sensitivity alters engagement, even among skilled users. These are upstream interventions critical to shifting public narratives.
The study shows a progression: safety builds confidence, confidence drives usage and usage delivers value. But perception must move first. Without that shift, tools remain underutilized and the narrative stays stagnant.
Reframing should focus less on power and more on purpose. People respond when AI is presented as a way to gain back time, reduce cognitive load or amplify creativity. These emotional outcomes unlock adoption, and researchers are best positioned to surface them.
Age differences add another layer. Younger professionals in the study reported higher preparedness and curiosity, while older respondents showed more skepticism and lower confidence. These contrasts highlight the need for segmentation strategies that address motivation and expectation, not just technical skills.
Turning AI anxiety into empowerment
When respondents were asked what AI could deliver at work, they didn’t ask for speed or precision. They wanted better work-life balance, faster skill acquisition and more creative output. These aren’t technical gains; they’re human ones.
Confident users reported the highest optimism and usage, while those describing AI as a “wild card” or “sidekick” were more hesitant and reported lower perceived value. Their expectations were shaped less by direct experience than by narratives of bias, opaque design and loss of human oversight.
The most effective path forward is removing abstraction and focusing on individual goals. Adoption depends on emotional relevance, whether the user is a retail manager forecasting inventory, a health care professional triaging cases or a marketer testing segments. Each is asking the same question: What can I achieve with this tool?
Helping Western markets catch up on AI
Adoption without confidence leads to surface-level engagement, and confidence without guidance leads to misinformed optimism. Western markets now sit at the intersection of both. AI tools are present but often misunderstood or underused. Most workers are willing to learn, but the infrastructure to support that learning remains inconsistent.
The solution isn’t just access – it’s structure. National policies that integrate AI education into primary and secondary curricula, standardized onboarding for professionals and industry-specific guidelines are essential. Without them, confidence stalls and perception hardens.
For brand-side researchers, the opportunity lies in understanding how people experience AI, not just how they use it. Resistance often begins where support ends – at the point of confusion, lack of transparency or fear of replacement. Better segmentation, targeted messaging and longitudinal sentiment tracking can address these gaps.
When research and technology move in step, hesitation turns into confidence, and anxiety into opportunity. That alignment is how the West not only closes the gap but sets a new standard for AI adoption.