The role of AI in screener writing
Editor’s note: Lisa Boughton is the director at Angelfish Fieldwork. This is an edited version of an article that originally appeared under the title “Human vs. AI: What’s the Future of Marketing Research Screener Writing?”
At AQR’s Powering Insights: Fieldwork & Ops Unleashed event, we posed a simple question to attendees: In one word, how do you feel about qualitative recruitment screeners written using AI?
The top three responses? Unsure. Excited. Cautious.
If that doesn’t sum up the current climate around AI in market research, we don’t know what does.
AI is everywhere in our industry right now, used for everything from data analysis to sentiment tagging. But as a fieldwork agency specializing in qualitative recruitment, there’s one area where AI’s presence is especially relevant to us: writing screeners. It’s a core service we offer and something we’re experimenting with – cautiously. We wanted to share what we’ve learned so far, both from our own trials and from the broader conversation in the industry. Because while AI-written screeners promise speed and efficiency, there’s much more to consider.
Why does screener writing matter?
Ask any experienced recruiter and they’ll tell you: Screeners are the backbone of successful qualitative recruitment. Yet they’re often overlooked.
A well-written screener:
- Ensures only the right people are selected.
- Establishes a clear, logical flow.
- Prevents mis-recruits (and project derailment).
- Builds rapport with respondents from the outset.
It’s not just a checklist; it’s an art. From tone and sensitivity to clarity and precision, the best screeners are crafted with care and honed through experience. And that’s where the AI conversation becomes more complex.
Where can AI help when it comes to screeners?
Let’s be fair: there are definite upsides to using AI to support screener writing.
1. Speed and scalability
AI can produce drafts in seconds. For high-volume projects or when deadlines are tight, this can be a game changer.
2. Structured logic
Language models are excellent at organizing information and applying skip logic, particularly when given clear prompts.
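For readers who like to see it concretely: here’s a minimal, purely illustrative sketch (our own, with hypothetical question IDs and answer options – not from any real screener) of the kind of rule table that skip logic boils down to. It’s exactly the sort of structured mapping language models handle well when prompted clearly.

```python
# Illustrative sketch only: screener skip logic as a rule table.
# All question IDs and answer options below are hypothetical.
SKIP_RULES = {
    ("Q1_own_car", "Yes"): "Q2_car_age",
    ("Q1_own_car", "No"): "SCREEN_OUT",        # non-owners are not eligible
    ("Q2_car_age", "0-3 years"): "Q3_usage",
    ("Q2_car_age", "4+ years"): "SCREEN_OUT",  # older cars out of scope
}

def next_step(question_id: str, answer: str) -> str:
    """Return the next question ID, or SCREEN_OUT for ineligible answers."""
    return SKIP_RULES.get((question_id, answer), "SCREEN_OUT")

print(next_step("Q1_own_car", "Yes"))  # -> Q2_car_age
```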
3. Language variability and knowledge access
AI can adapt tone and wording, translate concepts or pull in background information from a broad knowledge base, which can be helpful when drafting screeners on unfamiliar or technical topics. It also has the potential to tailor language to different target audiences, helping to create screeners that are clearer, more accessible and better aligned with how participants naturally speak and think. This can improve understanding, boost engagement and ultimately lead to higher-quality recruitment outcomes.
4. A co-creation tool
AI can serve as a set of building blocks, providing a useful starting point that saves time and gives researchers more capacity to focus on nuance, context and refinement.
The catch: What are we risking?
We surveyed the AQR audience again and asked: What is your top concern when it comes to using AI to create qualitative recruitment screeners?
Here’s what they said:
- Inaccurate screening/missing key details (48%).
- Missing human nuance (30%).
- Ethical concerns (15%).
Let’s unpack that.
1. Inaccuracy and gaps
Large language models like ChatGPT don’t “understand” context; they generate statistically likely text based on training data. This can lead to screeners that sound plausible but contain factual errors, miss critical criteria or misinterpret the brief entirely.
OpenAI themselves have admitted that ChatGPT can “hallucinate,” a term used to describe when the model produces confident but false or nonsensical responses.
2. Missing human nuance
This is where market research professionals shine. We know how to ease participants into sensitive topics. We understand that building trust starts from the first question. And we’ve learned – often the hard way – what phrasing works, what doesn’t and how even small wording choices can affect recruitment quality.
AI tends to work in black and white. But our job is often in the grey areas, balancing clarity, tone, ethics and participant comfort.
3. Prompt dependency
AI output is only as good as the input. Without screener writing expertise, it’s difficult to create prompts that result in usable screeners. You need to know what to ask for, and how to tell if the result is any good.
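To illustrate what “knowing what to ask for” means in practice, here’s a purely illustrative sketch of the level of detail a usable screener prompt needs to spell out. Every brief field below is a hypothetical example we made up, not a recommended template.

```python
# Illustrative only: the detail a screener-writing prompt needs to carry.
# Every field and value below is a hypothetical example.
brief = {
    "audience": "UK owners of electric vehicles, aged 25-60",
    "method": "90-minute online focus groups",
    "screen out": "anyone working in market research or automotive",
    "quotas": "a mix of gender and region",
    "tone": "conversational, suitable for telephone validation",
}

prompt = (
    "Draft a qualitative recruitment screener with skip logic "
    "and clear screen-out instructions.\n"
    + "\n".join(f"- {field}: {detail}" for field, detail in brief.items())
)
print(prompt)
```

Leave any of those fields out and the model will happily fill the gap with a plausible-sounding guess – which is precisely the inaccuracy risk described above.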
4. Bias and cultural framing
Tools like ChatGPT were trained primarily on Western, English-language content. That means they may reflect certain assumptions or overlook important cultural or contextual nuances.
5. Ethical and legal concerns
What’s entered into an AI platform could be stored or used to train future models. If you’re inputting confidential client data or IP, this can raise data security concerns. As a rule of thumb: if in doubt, leave it out. We recommend creating a clear internal policy that outlines when and how AI tools can be used – and what information should never be entered.
Human expertise still matters
Even if AI could write a perfect screener (which it can’t), it wouldn’t replace the market research expertise needed to understand the brief, the audience and the client objectives.
Experience writing screeners isn’t just academic. It’s informed by:
- Countless hours speaking directly with participants.
- Learning what resonates (and what doesn’t).
- Spotting red flags in responses.
- Adjusting for tone, flow and readability.
- Knowing when a screener is technically correct but practically unworkable.
A well-written screener should:
- Reflect the client’s goals.
- Be adapted to the target audience.
- Support telephone validation (if used).
- Use the right tone for the medium (e-mail vs. phone).
- Include clear screen-outs, quotas and eligibility conditions.
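To make that last point concrete, here’s a minimal, purely illustrative sketch (hypothetical field names and quota targets, not a real project) of screen-outs and quotas expressed as explicit checks – the structure a good screener encodes, whoever writes it.

```python
# Illustrative only: eligibility conditions and quotas as explicit checks.
# Field names and quota targets are hypothetical.
quotas = {"18-34": 4, "35-54": 4, "55+": 2}  # seats remaining per age band

def screen(respondent: dict) -> str:
    """Apply eligibility conditions first, then quota availability."""
    if respondent.get("uses_product") != "Yes":
        return "screen-out: not a category user"
    band = respondent.get("age_band")
    if quotas.get(band, 0) <= 0:
        return "screen-out: quota already filled"
    quotas[band] -= 1
    return "recruit"

print(screen({"uses_product": "Yes", "age_band": "18-34"}))  # -> recruit
```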
The professional responsibility
As AI becomes more embedded in research processes, we all have a responsibility to ensure it’s used ethically and effectively.
- Follow MRS guidance on transparency, bias and participant rights.
- Train your team to understand the limitations of AI tools – and ensure they’ve also learned the traditional way, writing screeners without AI, to fully grasp the craft.
- Build in professional oversight: AI should support, not replace, expert review.
- Collaborate internally: A second pair of eyes can catch what the first missed.
And crucially: never lose sight of the participant experience. The goal is to make screeners that are clear, conversational, respectful of people’s time and aligned with project goals. That takes more than logic; it takes empathy.
AI is smart, but you’re smarter
We’ll say it loud and proud: AI is an exciting tool. We use it. We explore it. We see the potential. But it doesn’t replace human research professionals – it needs us.
It needs our:
- Critical thinking.
- Experience.
- Sensitivity.
- Industry knowledge.
- And perhaps most of all, our judgement.
AI can be your co-writer. But you should always be the editor-in-chief. Let’s not lose sight of the value we bring, not just in screener writing, but across the entire market research process: from surveys and guides to e-mails and recruitment. We’re not “just human checkers.” We’re trained professionals, and that matters more than ever.