The insights stack is breaking

By Daniel Graff-Radford, CEO, Discuss

A few weeks ago, I found myself in a familiar situation: sitting across from a senior insights leader, notebook open, trying to ask the right questions – and then get out of the way.

Our executive team had been traveling across North America and Europe, meeting with nearly 30 customers one-on-one after the merger of Discuss and Voxco. These weren’t roadshows or demos. They were listening sessions. The kind where you stop talking about what you’re building and start paying attention to what people are actually trying to do.

What I heard, repeatedly, was not subtle.

When your career is built on valuing research and rigor, but the system you operate inside – tools, workflows, org design, expectations – was built for a slower world, it's beyond painful.

A world where an eight-week cycle was acceptable because the market didn’t shift underneath you while you were still writing the report? That world is gone.

And 2026 is going to be the year this becomes obvious to everyone, including the people who approve budgets.

The real problem isn’t qual vs. quant. It’s latency.

The industry loves to debate methods. Qual versus quant. Depth versus scale. Story versus stats. If you’ve spent enough time in research, you’ve heard every version of these arguments.

But in the conversations we’ve been having, method isn’t the friction point. Latency is.

Latency is the time between “I’m curious” and “I’m confident.” It’s the lag between a real business question and an answer the business will actually act on. It’s the dead space where context fades, urgency dissipates and momentum dies.

The uncomfortable truth is that many insights teams have become very good at producing outputs that arrive too late to matter.

When that happens, something predictable follows: stakeholders decide anyway. They rely on partial information, instinct, internal politics or last quarter’s dashboard. Research becomes a retrospective justification instead of an input to the decision.

This isn't a talent problem. It's a structural one.

Our industry built a research workflow that assumes handoffs are free. They’re not.

Most organizations still run qualitative and quantitative work in separate lanes. Different tools. Different teams. Different vendors. Different timelines. Often different incentives.

In theory, these lanes “connect.” In practice, they collide at the end, like two trains pulling into the same station without a shared map.

Here’s the cost that rarely makes it onto a slide: every handoff introduces friction.

The qual team uncovers something that should reshape the quant instrument but the quant study is already programmed. The quant team spots an unexpected signal but the qual budget is spent. Someone tries to stitch the story together in a deck full of screenshots, pasted tables and hope.

Everyone is working hard. The system is still leaking value.

That’s why, across our listening tour, I kept hearing a version of the same request: Help us stop treating qual and quant like separate planets.

Not because leaders want philosophical alignment. Because they want decisions that hold up in the real world.

AI isn't the story, but it changes the standard

AI isn’t interesting because it can shave a few days off a task. That’s table stakes. The real shift happens when speed becomes available and expectations reset.

Once stakeholders believe fast feedback is possible, they stop tolerating slow cycles. Not out of impatience, out of necessity. The market isn’t waiting for a beautifully formatted deck.

In these conversations, insights leaders described a new kind of pressure. They're being asked to provide strategic guidance at the pace of product, the pace of marketing, the pace of the internet, all while operating with tools and workflows designed for a quarterly cadence.

This is where AI becomes both a force multiplier and a spotlight.

It amplifies what strong researchers can do. It also exposes where the workflow is still manual, fragmented and held together by heroics.

Heroics are admirable. They are not a strategy.

The missed opportunity: reuse

One of the chronic tragedies in research is how much knowledge disappears after each study.

Teams have years of interviews, open-ends, recordings, transcripts, trackers and segmentation work. Then, six months later, someone asks a question that has essentially already been answered, and the process starts over.

Why? Because the data lives in different systems. The context is buried. The effort required to find and trust what you already know is higher than the effort to rerun the work.

The knowledge exists. The operating model just makes it hard to use.

The strongest insights organizations in 2026 will start behaving like knowledge organizations.

They’ll build bodies of evidence that compound over time. Reuse what they’ve learned instead of re-creating it. And connect “why” and “what” without rebuilding the universe every time a new question appears.

Vendor sprawl is becoming a tax on progress

Another theme came through clearly: teams are tired. Not tired of the work. Tired of the overhead. Too many vendors. Too many logins. Too many contracts. Too many tools that are excellent at one narrow task and silent everywhere else. Too many hours spent coordinating instead of thinking.

In a world where every function is being asked to do more with less, vendor sprawl stops looking like sophistication and starts looking like operational debt.

And 2026 is different for one reason: leadership teams are finally connecting the dots. They’re realizing the cost isn’t just subscription fees. It’s latency. Lost reuse. Handoff friction. And the very real risk of making decisions with stale inputs.

So what happens in 2026?

The gap between what businesses expect from insights and what most insight operating models can deliver will widen. Some teams will close it. Others will get squeezed.

You’ll see organizations reorganize around decision velocity, not method silos. Platforms will be evaluated on whether they reduce latency end to end, not whether they offer a flashy feature. Researchers will be pushed toward higher-leverage work as mechanical steps are automated or eliminated.

And a new standard will emerge: insight that is both fast and trustworthy.

What are three things insights leaders can do in 2026 (without burning everything down)?

1. Measure latency like it matters. Not “time to field.” Not “time to report.” Measure the full distance from question to confident action. If you can’t see it, you can’t fix it.

2. Stop letting handoffs be invisible. Map where qual informs quant and where it doesn’t. Map where quant raises questions qual could answer and where it can’t. The friction you ignore today is the friction that will break you tomorrow.

3. Treat research as an asset, not an output. Design for reuse. Make it easier to find what you already know than to rerun the work. If your organization isn’t learning faster over time, you’re just repeating yourself at scale.

Start with human conversations

When launching a product in a new market, you start with human conversations, pressure-test what you hear with a broader study and pull it together into a decision you can defend.

The details change. The pattern doesn’t: curiosity, evidence, confidence.

That’s what insights are meant to enable.

If your systems add friction between curiosity and confidence, you’ll feel it in 2026 because the world isn’t slowing down to accommodate your workflow.

Instead of relitigating qual versus quant, the teams that win will be the ones who remove the nonsense, connect the evidence and help their organizations make better decisions while the decision still matters.

www.discuss.io