Editor’s note: John Friberg is vice president at Healthcare Research Worldwide (HRW), a global health care research agency.
Among the many use cases proposed at conferences and in articles by industry personnel over the past year, one idea has proven particularly disruptive – that we will be able to use ChatGPT and other generative AI platforms as a source of data collection, rather than just a tool to aid in the collection process.
In other words, some within the marketing research industry suggest that ChatGPT will soon serve as a well-informed, fully qualified market research respondent, able to talk to our moderators about an endless list of research topics.
How realistic is this proposition? Will there be some types of market research where AI can replace respondents and other types where it cannot?
There currently exists a topic “ceiling,” beyond which many of us are no longer comfortable relying on the advice of a generative AI platform.
Try asking yourself the following question: What wouldn’t I take ChatGPT’s word on without a second opinion?
That answer often includes examples related to the health and well-being of ourselves and those we love. For these decisions, we usually want to consult with a fellow human, preferably one with training and experience in a relevant field of expertise.
It’s likely that, as more studies are conducted and the results are publicized, trust in AI as a source of health care-related information that we would normally get from our doctor will increase, and our AI “ceiling” of comfort will rise.
For now, many patients feel that the health care information being provided by AI platforms does not meet the standard of reliability necessary in real, clinical practice.
In a recent study funded by the National Institutes of Health and conducted by the University of California, San Francisco, researchers found that respondents were still ...