Research shows AI adoption is not just a technical challenge

Editor’s note: Djordjija Petkoski is a senior fellow and lecturer at the Wharton School. James Forr is head of insights at Olson Zaltman.

Despite the bold proclamations and sweeping forecasts, the true impact of AI is far from clear. The technology is amazing, but is the workplace revolution it portends imminent or illusory? 

A recent McKinsey study shows that while 78% of organizations have adopted AI in some form, just 1% have fully integrated it into their operations, and fewer than one in five have realized any significant earnings impact. Why?

Will AI fully automate jobs out of existence, or will it serve as a positive force that augments human effort and elevates our productivity?

Human inertia, too, is a powerful force. Even if the tools are ready, are we?

Business leaders are struggling to unravel the answers to these knotty questions. Olson Zaltman partnered with the Zicklin Center for Business Ethics Research at the Wharton School to conduct in-depth interviews with 18 executives from a range of organizations worldwide. Our goal was to explore the emotional terrain of the AI landscape and understand not just what leaders think but also how they feel.

To explore the unconscious drivers of behaviors, we used the Zaltman Metaphor Elicitation Technique (ZMET). The first step was to ask participants to collect a set of metaphorical images that symbolized their thoughts and feelings about how AI will affect them and their organizations. Those images became the springboard for a 60-minute discussion, during which we explored the unconscious meanings behind the pictures and learned more about the emotional experience of being an executive in the dawning age of AI.

From this in-depth qualitative exploration, we discovered three distinct executive mind-sets, all centered on metaphors of motion.

Mind-set No. 1: AI is moving too fast.  

Some executives feel AI is less a tool than an existential threat. They warn of a dystopian future in which unemployment is rampant, workers’ rights are erased and humans become subordinate to the will of technology. It is a world where humans are reduced to vestigial appendages, like wings on a flightless bird.

More immediately, these leaders worry they have already lost standing and relevance. One senior executive, a man in his 50s who works for a large European corporation, noted that his team “felt like idiots” when a young representative from an AI vendor pitched them a complex new offering. In that moment, he empathized with his mother, a small shopkeeper who was so flummoxed by Excel that she kept her books by hand. “I am trying to catch up,” he said. “But I have a feeling I missed something and am not where I’m supposed to be.”

An owner of a small U.S. firm selected an image of seven skiers standing paralyzed in an impenetrable blizzard, each staring in a different direction, to symbolize his feelings of vulnerability:

“You are stuck and effectively blinded. There is no North Star, no direction, no easy answer. The technology is so complicated that even the people creating it don’t fully understand it, let alone the people who are relying on it. 

“No one has seen this movie before. Your experience can’t help you in this moment, so you can’t be a leader in a traditional sense. 

“A colleague who advises us is a tech guy who has been around for 20 years. Every three weeks he has to throw out what he has built and start over. So, experience counts for something, but it counts for much less than it used to.”

Mind-set No. 2: AI is moving more slowly than we think. 

In contrast, this group of executives feels relatively confident about their knowledge of AI. In fact, many of the IT decision makers we interviewed held this mind-set. Although they’re excited about AI’s potential, they caution against the perils of grand promises and breathless hype.

A general partner at a technology venture capital firm asserted, “We forget how much of the magic is actually put together by people.” Moreover, those people often seem more focused on the sizzle than the steak; AI-powered solutions may look impressive on the surface but often prove ill-suited for real business challenges.

These leaders see themselves as the voice of AI reason – or, in some cases, the AI bad guy. They’re frustrated by pressure from less tech-savvy stakeholders to ride the AI wave. The chief digital officer of a conglomerate based in Asia used an image of a tortoise to represent the difficult balance between his desire to incorporate AI more extensively and his concern about moving too quickly. He said much of his job is to cool internal expectations:

“Slow and steady wins the race. People want it to work now, but you have to put foundations in place first.

“People will say, ‘My son can write code in a couple of days. Why can’t AI do the same?’ It’s like, hang on. That’s a totally different scale.

“CEOs and shareholders want to keep up with the Joneses, but then they lose interest, which is frustrating. I am trying to protect this turtle, trying to make it run faster without killing it. I want it to be here for the long run.”

Mind-set No. 3: AI is moving at just the right speed.

In a long-ago episode of “The Twilight Zone,” Rod Serling said, “Science fiction [is] the improbable made possible; fantasy, the impossible made probable.” This group of executives sees AI bringing those two realms together.

These are the AI evangelists. They don’t deny the risks or the ethical landmines, but their primary emotion is excitement. They believe AI will amplify human potential rather than diminish it. As one respondent put it, AI could improve our lives on the scale that electricity has.

An American entrepreneur selected the Christ the Redeemer statue in Rio de Janeiro to illustrate the sense of power she feels when using AI to stand toe-to-toe with larger, more established competitors:

“AI is about to take us to a higher place. It gives us the opportunity to present ourselves, our visions, our dreams; to understand things we didn’t know at the speed of light. That is exhilarating. That is inspiring. It fills you with hope. 

“Everybody is not going to go to an Ivy League school. Not everyone is going to get a four-year degree. But success should not be predicated on that. There isn’t a secret society of information only for those who have amassed a certain amount of wealth. The information is available to me just like anyone else.

“AI will transform me, my organization and everybody who is connected to me. If I am better positioned as a business leader, it gives me global opportunities. It reminds me of the dreams I had as a young girl being raised in very meager beginnings and tells me those dreams are possible.”

Implications of AI adoption

These emotional profiles reveal that AI adoption is not just a technical challenge – it is an emotional and cultural one. It has implications for hiring, retention, operations and marketing. In times of uncertainty like these, executives must lead with empathy, and empathy begins with understanding.

Every organization is its own unique ecosystem, an intricate web of relationships among employees, leadership, customers, corporate values and technology. For AI to be successfully integrated, leaders must grasp how people perceive these relationships and how they believe AI will enhance or disrupt the existing internal dynamics. 

This requires using research that explores organizational psychology and behavior. A psychological profile of the workplace can provide executives with a palette of words, metaphors and emotional levers to deploy when framing new AI initiatives and helping employees develop the hard and soft skills they need to thrive in an AI environment.

Another implication is the relevance of the AIability framework (introduced by Petkoski at the OECD Global Partnership on AI Summit in December 2024). The challenges faced by executives in implementing AI mirror those they have faced in implementing sustainability. Both have reshaped strategy and stakeholder expectations, yet both also face emotional resistance, uncertainty and misunderstanding.  

AIability consists of five building blocks (thriving ecosystems, economic alignment, governance, ethics, leadership and human-centered approaches) that fuse AI and sustainability efforts into a cohesive strategy. 

Ultimately, the path forward for AI will be blazed in part by the people most affected by it – the executives and employees who are the heart of every organization. The ability to understand and acknowledge people’s emotional mind-sets, and to recognize one’s own emotional biases toward AI, may be the defining leadership skills of this new era.