How to track thoughtfulness 

Editor’s note: Jennifer Reid is co-CEO of Rival Group.

I keep bringing up thoughtfulness because I think it’s the most important thing we’re not measuring. When we talk about the health of a panel or community, the conversation almost always stops at engagement: response rates, completion rates, churn. Those are fine, but they’re only telling us who showed up. They don’t tell us if the answers we got were meaningful.

Showing up isn’t the same as thinking. Someone who speeds through a survey or leans on AI to fill space technically counts as “engaged,” but the output isn’t going to help anyone make smarter decisions. In a world where more and more analysis is automated, mistaking surface-level completion for quality is a problem that compounds fast.

Understanding the depth and clarity of answers

To make thoughtfulness tangible, we created a 10-point scoring system that evaluates open-ends for relevance, specificity, clarity, empathic tone and more. When an answer is already high-quality, the score tells us so, and we avoid piling on unnecessary follow-ups. If the score is low, an AI tool probes with a contextual question to draw out more detail. If it’s high, we move on.
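The score-then-probe flow above can be sketched in a few lines. This is an illustrative stand-in, not the actual system: the real 10-point rubric is not public, so `score_answer` below is a toy proxy and `PROBE_THRESHOLD` is an assumed cutoff.

```python
# Hypothetical sketch of the score-then-probe decision flow.
# score_answer is a toy proxy for the real 10-point rubric
# (relevance, specificity, clarity, empathic tone, ...).

PROBE_THRESHOLD = 5.0  # assumed cutoff on the 10-point scale


def score_answer(text: str) -> float:
    """Toy proxy: longer, more concrete answers score higher."""
    words = text.split()
    specificity = sum(1 for w in words if any(c.isdigit() for c in w))
    return min(10.0, len(words) / 10 + specificity)


def next_step(answer: str) -> str:
    """Decide whether to probe for more detail or move on."""
    if score_answer(answer) < PROBE_THRESHOLD:
        return "probe"      # ask a contextual follow-up question
    return "continue"       # answer is already rich enough
```

The key design point is the early exit: a high-scoring answer is accepted as-is, which respects the participant's effort instead of probing by default.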

This approach respects the effort participants put in and sharpens the quality of what researchers receive. The difference is striking:

  • Traditional open-ends averaged 3.69 on our thoughtfulness scale.
  • Conversational text answers improved to 4.95.
  • AI-probed responses reached 6.21.
  • Video responses topped the list at 6.70.

Alongside higher scores, responses were also longer: 2.5x longer with conversational design, 5x longer with AI probing and nearly 8x longer with video. Side-by-side reviews confirmed that themes pulled from conversational surveys were rated higher on actionability, clarity and insightfulness than those from traditional surveys.

Tracking thoughtfulness

Thoughtfulness can be tracked across time and communities. Panels could be benchmarked not just by participation, but by the depth of the answers they generate. High-scoring individuals could be prioritized for projects that demand richer qualitative feedback. Incentives and rewards shouldn’t just recognize that someone completed a survey, but that they took the time to give meaningful input.
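The benchmarking and prioritization ideas above amount to simple aggregation over per-response scores. A minimal sketch, assuming scores on the 10-point scale are already collected per panel and per participant (the function names and the 6.0 cutoff are illustrative, not from the article):

```python
# Illustrative sketch: benchmark panels by depth of answers, not just
# participation, and flag high-scoring individuals for richer projects.
from statistics import mean


def benchmark_panels(scores_by_panel: dict[str, list[float]]) -> list[tuple[str, float]]:
    """Rank panels by mean thoughtfulness score, highest first."""
    ranked = [(panel, round(mean(s), 2)) for panel, s in scores_by_panel.items() if s]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)


def high_scorers(scores_by_person: dict[str, list[float]], cutoff: float = 6.0) -> list[str]:
    """Participants worth prioritizing for qualitative-heavy projects."""
    return [p for p, s in scores_by_person.items() if s and mean(s) >= cutoff]
```

Tracked over time, the same aggregates give a thoughtfulness trend line alongside the usual engagement metrics.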

So much of research today leans on qualitative inputs at scale. The numbers are important, but the real breakthroughs come from the nuance and context that only thoughtful answers can provide. As AI becomes a bigger part of our workflow, the quality of those inputs will only matter more.

The shift ahead

Engagement will always matter, but it can’t be the whole story. It’s time to start treating thoughtfulness as a KPI and include it in your regular reporting. Use it to decide when to dig deeper (and when you don’t need to), and recognize and reward it when you see it. Doing so raises the bar for how we define quality in our industry, while setting us all up for better analysis, more confident decision-making and AI systems that are trained on inputs that actually matter.