Editor's note: Jim Nowakowski is president of Interline Creative Group. He can be reached at jim@interlinegroup.com.

Artificial intelligence is no longer on the horizon – it’s here, reshaping the way researchers, marketers and analysts approach problem-solving. And yet, despite its growing presence and potential, AI continues to spark a familiar concern: Can it be trusted?

That’s the wrong question.

The real question – the one every responsible professional should be asking – is: How do we ensure that we can trust ourselves to use it wisely?

AI isn’t the threat. It’s a tool. And like every tool, its impact depends on the hands that wield it.

What AI won’t do for you

To clarify what AI is – and isn’t – let’s be specific. Here are examples of what models like ChatGPT or Claude will not do: impersonate individuals or organizations; generate medical, legal or financial claims without disclaimers; create false credentials, certifications or survey data; provide or simulate identifiable personal data; or evade platform content policies when prompted for unethical output.

These aren’t theoretical boundaries. They’re hard-coded into the way large language models are trained and deployed. In real-world use, these guardrails actively prevent – not promote – misconduct. When triggered, the models typically explain why a prompt is inappropriate rather than simply refusing it.

The issue isn’t that AI can invent falsehoods. It’s that people can misuse AI to simulate credibility. That distinction matters.

A tale of two collaborations

In the past six months, I’ve led two complex B2B research projects that used AI in structured, transparent ways. One involved a question-by-question audit of an existing survey study with plans to replicate it this year. The second was a technical analysis of critical research parameters within an industry’s product specification library.

In both cases, AI was not the source of truth. It was a lens through which we organized inputs, surfaced insights, challenged assumptions and connected data points across regulatory and technical frameworks. The technology made the work faster, clearer, and – importantly – more defensible.

Better speed. Sharper insight. Greater integrity. Not despite AI – because of it.

Fabrication is not a feature of AI – it’s a human failing

Fake participants are not a new phenomenon. Neither is data manipulation. Fabrication in research existed long before AI, and it has always stemmed from human intent, not software.

People can lie. Researchers are people. Researchers can lie. In fact, many examples of research fraud have occurred with nothing more advanced than a spreadsheet. If anything, AI has spotlighted these issues – by raising new conversations around auditability, verification and bias that some corners of the research world have long ignored.

What are we really afraid of?

AI hasn’t changed that old computing expression about garbage in, garbage out; it’s just made it faster. If the prompts are biased, vague or ethically flawed, the outputs will be too. But that’s not a failing of AI. It’s a failing of process.

What worries me far more than “fake participants” is the very real possibility that fear of AI will prevent capable researchers from using one of the most promising analytical tools in a generation.

We don’t need to fear the tool. We need to build better cultures, training and ethical frameworks around its use.

Here’s an anecdote that says it all:

A few years ago, during a phone interview for a research project on plumbing fixtures, I spoke with an industrial designer. As the conversation evolved, he revealed that he owned a Tesla. I asked him, point-blank: If you had to choose between my client’s product and your Tesla, which would you pick? Without hesitation, he answered: “Your client’s product – the bidet.”

When we played that quote back to the client team, it spread like wildfire. People couldn’t believe it. Fortunately, we had the recording.

Today, with synthetic voice generators and AI-driven deepfakes, that same moment could theoretically be faked. But why would anyone do that?

Faking a quote might generate short-term buzz, but it would destroy long-term trust. And trust is the foundation of research. It’s what makes findings usable. Actionable. Believable.

In short: Truth is still the product.

A warning worth heeding

General Jim Mattis once wrote: “Digital technologies do not dissipate confusion; the fog of war can actually thicken when misinformation is instantly amplified.” AI has the promise of making that fog so thick, we lose sight of ourselves as well. But it also has the potential to cut through that fog – if used with transparency, accountability and an unwavering commitment to truth.

So let’s have the conversation about research integrity. But let’s ground it in lived experience, not fear. Let’s build stronger processes, not blame sharper tools.

AI is not the enemy. Misuse is. And the best way to combat misuse is not to avoid the tool but to learn how to use it responsibly – and to demand the same from those we work with.

The real wake-up call isn’t about artificial intelligence. It’s about what we, as researchers and marketers, are willing to claim as knowledge and how we ensure it earns that name.

There’s nothing new under the sun – not even the temptation to misuse a tool. The real test isn’t whether AI can be trusted. It’s whether we can be.