Six graduate hires, six AI roles and a redefined path to insight
Editor’s note: Kelly McKnight is executive director at Verve. An insight leader, McKnight has a background in cultural intelligence and a love of the unexpected.
Few industries are watching AI as nervously as market research.
Much of what researchers have traditionally done – coding text, analyzing transcripts, synthesizing findings – overlaps closely with the kinds of cognitive work large language models perform well. An Anthropic study (March 2026) even identified market research as one of the most exposed industries.
The result is growing anxiety about entry-level roles in research and insights.
Graduate recruitment appears to be slowing, driven by a mix of economic pressure, offshoring and automation. As Liz Norman, chief executive at Elizabeth Norman International, has observed, many large organizations simply don’t yet know what the entry-level insight role will look like in three years’ time – and as a result, they are not recruiting for it now.
But something else is also happening.
Over the past six months, Verve has hired six graduates into AI-focused roles – positions that barely existed two years ago. These are not researchers using AI to run traditional projects faster. Their job is to direct, validate and interpret AI-generated insight.
Together, they offer a tangible view of what the AI-ready researcher looks like.
Meet the AI-ready researcher
We spoke to the six researchers – Ben, Daniel, Isla, Joe, Nicole and Serena – about their backgrounds and how they work with AI day-to-day.
One thing became clear immediately: None of them see themselves as researchers in the traditional sense. That alone says something about how different the role is.
Across those conversations, four shifts stood out.
1. A different insights mind-set: Judgement over logic.
The most striking difference between these new hires and established entry-level research roles isn’t technical skill. It’s mind-set.
When asked what matters most when working with AI, one graduate answered simply: “Judgement.”
Several had chosen humanities or social science degrees precisely because they were drawn to interpretation rather than certainty. As one explained, “I preferred the humanities because I like ambiguity … I don’t really like things that are black and white.”
That mind-set turns out to be particularly useful when the colleague you are working with is a large language model.
AI systems are powerful, but they are also prone to bias, distortion and overconfidence. Much of the work involves spotting when something feels wrong – questioning assumptions, adjusting prompts and ensuring AI personas behave in ways that feel recognizably human.
AI outputs are often technically correct but strategically absurd – internally coherent yet disconnected from how real people actually behave in culture and in markets.
In other words, the task is not simply to accept the output, but to judge it.
Both analysis and judgement are forms of research thinking. But as AI takes on more of the analytical workload, human judgement – and comfort with ambiguity – becomes more valuable, not less.
2. A different research workflow: Building worlds.
The second shift is what the work actually looks like.
Spend time with these new recruits and it quickly becomes clear that much of their job is building worlds.
Instead of producing analysis directly, they build the environments in which AI generates insight – creating synthetic populations, designing personas, structuring knowledge bases and developing ontologies that help AI systems reason about behavior.
The goal is to create worlds where AI can behave like recognizable human audiences: consumers, experts or communities whose perspectives can be explored and tested.
Traditionally, early research roles looked different. Much of the craft involved learning how to structure insight from evidence – charting data, summarizing interviews and gradually building logical arguments that lead to a clear conclusion.
Both approaches produce insight. But one focuses on building the argument, while the other focuses on building the world that generates it.
3. A different way of learning: Experiments over methods.
The third shift is how researchers learn.
Traditional research training revolves around method – designing surveys, conducting interviews and analyzing transcripts to understand behavior.
Researchers in these AI-focused roles learn through experimentation.
AI-ready researchers test prompts, compare model responses and run simulations to see how synthetic audiences behave. They run A/B tests on prompts, personas or knowledge bases – changing one variable and observing how the system responds.
As one described it, the process often involves “trying things, seeing what happens and refining it.”
The rhythm starts to look less like running a research project and more like running a series of experiments.
The same instincts about bias, interpretation and human behavior still apply. They’re just being exercised in a different place.
4. A different kind of research impact: AI as a leveler.
Perhaps the biggest surprise is how quickly these new entrants are having an impact.
Several described AI as a leveler. When everyone is working with the same models and tools, experience matters less than curiosity and experimentation.
As one of our interviewees said, “Everyone’s figuring this out at the same time.”
Across the interviews, several examples stood out. One team member attracted more than 70,000 views on a recent LinkedIn post, sparking conversations with academic researchers in the field and leading to an invitation to join her university’s AI advisory group.
Another described conducting “frontier work” experimenting with how large language models could be integrated into machine learning pipelines.
Others described contributing directly to senior client conversations – presenting AI experiments and advising teams far earlier in their careers than they expected.
By contrast, the previous cohort of graduates described impact emerging more gradually. Their highlights came from working on interesting and culturally relevant projects – analyzing TikTok food trends, exploring eco-anxiety or contributing to brand innovation work.
Together, these examples point to something important.
AI is compressing the research career curve.
When much of the mechanical layer of work disappears, early career researchers can contribute meaningfully much sooner.
The entry-level research role is evolving
What these conversations make clear is that the entry-level research role is not disappearing – it is evolving.
For decades, the craft of research has been built on rigorous thinking: understanding people, questioning evidence, recognizing bias and turning complexity into insight. Those foundations remain essential.
What is changing is where those instincts are applied.
As AI begins to take on more of the mechanical work of analysis, early career researchers are spending more time shaping, testing and interpreting the systems that generate insight. In doing so, the experience curve begins to compress. Researchers can contribute meaningfully earlier, not because the craft matters less, but because the tools allow them to engage with it sooner.
The industry may still be debating what the entry-level researcher of the future should look like. But the experiences of these six early career researchers at Verve suggest that future is already starting to take shape.
Just don’t call them market researchers.