Leveraging AI in research
Editor’s note: Allison Rak is a researcher and entrepreneur with over 20 years of experience in consumer insight. She’s the CEO and owner of Vatoca, a qualitative research and innovation firm.
The explosion of AI can feel mind-boggling. Even for those of us who lived through the dot-com boom and Web 2.0 – in Silicon Valley, no less – this transformation feels exponentially more significant. Whether you're on the forefront or still sorting out where you stand, most of us are at least asking: How do I want to approach this, and what does it mean for my future?
I'm thinking about it for myself and my business, but also for my clients. More than one has asked me to help shape their AI transformation while navigating the flood of tools promising to help us do more, better and faster.
So, when I had the opportunity to attend HumanX – a conference exploring AI and everything it's bringing to our world – I jumped at it. Thousands of business leaders, technologists and innovators gathered to wrestle with how artificial intelligence is reshaping the way we live and work. It was a drink-from-the-firehose experience: absorbing enormous amounts of information while simultaneously trying to figure out what's real, meaningful and actionable.
Here are three highlights with particular relevance for the qualitative research industry.
1. Transformation
As I've been figuring out how to best leverage AI in my work, I'll admit I've been doing it piecemeal – chasing shiny objects, excited by efficiencies. But I left the conference convinced that's the wrong approach. The companies that will thrive aren't those scrambling to adapt tool by tool. They're the ones that step back first.
As researchers, we're well positioned to do this. Our projects – when done right – start with clear objectives, then identify the best approach to answer them. We should apply that same discipline to our businesses. What are we trying to achieve? Once we define that, we can look at the full landscape of tools and determine how to deploy them purposefully.
In other words, we shouldn't be asking how to use AI to write a better discussion guide or write one faster. We should be asking what purpose the guide serves – and then ladder up from there. At the very highest level, what are we trying to accomplish and how can we achieve that best with the tools now available to us?
You know the quote widely attributed to Henry Ford: "If I had asked people what they wanted, they would have said faster horses." We need to stop using AI to build a faster horse.
2. AI cannot define taste
This theme came up across so many sessions. There's growing consensus that however remarkable AI becomes, it is unlikely to ever replace human taste. The real question is: Who becomes the arbiter of taste, and how do we hold onto it?
In research, this comes into play in a few different ways. I think we have an opportunity to define when taste is required – and therefore when human touch is imperative. At a basic level, if we agree that AI cannot define taste, then we'd better be very careful about whether and when synthetic respondents are used. As an industry, we should be determining what types of insights are truly meaningful and valuable, which methodologies are legitimate and where the ethical and professional lines should be drawn. These conversations aren't happening enough – and they may be the most important ones we can be having right now.
3. Authentic participants
This last takeaway is practical. At the conference, I spoke with a company called Certn, which specializes in employee background checks – particularly for candidates outside the U.S. With the rise of deepfakes, they're seeing companies nearly hire people who don't actually exist. They've uncovered cases where a “candidate” who sailed through four rounds of online interviews turned out not to be a real human at all. The deepfakes were good enough to craft a perfect resume, a compelling cover letter and interview responses convincing enough to earn a job offer.
Researchers could just as easily conduct a full series of online interviews with deepfakes.
The good news: Certn shared two simple detection techniques that researchers can use today in any online interview or focus group:
- Ask the participant to put their hand in front of their face. Current deepfake technology handles this poorly, making it an easy red flag to spot.
- Ask the person to move out of frame. Ask them to show you where their computer is plugged in, or anything else that requires them to step away from the camera. The visual distorts in ways that expose the deepfake.
Eventually the technology will catch up, and we'll need new methods. Perhaps it will make a case for going back to in-person research. But for now, these are two ways to stay ahead of the risk.
Looking at the future
The weeks, months and years ahead will likely be unlike anything we've seen before. But by approaching AI thoughtfully, proactively and with curiosity – just as we've always done in our work and lives – we can navigate this brave new world with our footing intact. And hopefully also have some fun along the way.