How AI has affected researchers and the insights industry
Editor’s note: Erica Parker is managing director at The Harris Poll.
For years, the insights industry has debated whether AI would automate the work researchers do or expand what they’re capable of. The latest data suggests the answer is both – and the tension between those forces is now the defining reality of research in 2026.
A study of 219 U.S. insights professionals from The Harris Poll and QuestDIY (registration required) shows AI is no longer an experiment or a side tool. AI is deeply embedded in the workflow, with 98% of researchers having used it in their jobs in the past year, and 72% using it daily or more.
How teams have already adopted AI
The numbers make one thing clear: AI is already baked into the infrastructure behind modern research, not an optional add-on. Most of that adoption sits in analysis. Researchers are using AI to merge data sources, code open-ended responses, summarize findings and automate reporting. More than half say they save at least five hours a week because of AI, and many credit it with improving accuracy or surfacing insights they would otherwise miss. In a field where time pressure is constant and stakeholders want answers yesterday, those gains matter.
But this isn’t a frictionless revolution. Accuracy, explainability and privacy are persistent flashpoints. A third (33%) cite data-privacy concerns as a top barrier to adoption, and 39% point to errors or hallucinations as a real constraint, not a theoretical one.
This duality – acceleration and risk – is shaping the new norms for research teams. Researchers increasingly describe AI as a “junior analyst” that can move fast and process at scale but needs supervision. That framing reflects a deeper shift: AI is expanding the surface area of what’s possible, but humans are still the arbiters of what’s true, relevant and responsible.
The effects of AI on the researcher role
The data also signals a broader cognitive pivot for the industry. While researchers expect AI to take on more autonomous tasks by 2030 – drafting surveys, generating first-pass reports, fusing structured and unstructured data – they also see their own roles becoming more strategic. They anticipate spending less time executing and more time validating, contextualizing, translating and advising. In other words, the future researcher is not the person who touches every data point; it’s the person who ensures the output is meaningful.
What’s striking is how consistently researchers emphasize the human-led, AI-supported model. The majority (89%) say AI has improved their work life, but only a minority see it as a standalone decision maker. Instead, they describe a world where AI handles scale and speed, and humans handle judgment, ethics, narrative and business alignment. Far from eliminating the researcher, AI is recasting the role into something closer to strategic counsel.
That evolution won’t happen by accident. The barriers researchers cite – training, time to learn, workflow integration, explainability – point to an industry that needs better guardrails, clearer policies and more transparent AI systems. Researchers are not resisting AI. They are asking for the structure that lets them use it without compromising on quality.
The future of AI in the research and insights industry
The bottom line for 2026 and beyond is straightforward: AI is not replacing researchers, but it is rewriting the job description. The researchers who thrive will be the ones who pair machine-scale processing with human-scale judgment – the ability to ask sharper, more strategic questions, validate outputs rigorously and connect insights to decisions with cultural and ethical awareness. Speed is no longer the differentiator; interpretation is.
In an era defined by automation, the most valuable skill in research may be the one AI cannot replicate – knowing what truly matters.