Editor’s notes: Marcelo Bursztein is CEO and founder of NovaceneAI, a software development company located in Ottawa, Canada. This is an edited version of an article that originally appeared under the title, “Quirk's New York Recap: Embracing Human-Centricity in the Age of AI.”

When I attended the Quirk’s Event - New York this year, I tried to attend all the keynote presentations that had AI in their title. 

It was nearly impossible. There were so many that they overlapped. Still, I walked away from this conference with new insights into AI, including its capabilities, how to protect data and where humans fit into this rapidly evolving world.

Most of the presentations I attended were about generative AI and, more specifically, about ChatGPT. These seminars covered basic explanations of what the technology is, how to write effective prompts, its benefits, such as helping researchers develop questionnaires, and the various threats it poses.

On the topic of threats, data quality was one of the most prevalent concerns, with presenters citing chatbots that impersonate real participants and compromise data validity. That validity is crucial to a project: when the data can’t be trusted, the project is dead on arrival.

These presentations provided practical strategies to counter the risks, but audiences seemed focused on a slightly different set of questions. For example:

There is a possible explanation for this disconnect between the power of recent AI advancements and experts’ ability to find clear ways to take advantage of that power.

AI is fundamentally a technical discipline, spearheaded by data scientists and software engineers. While it is easy to demonstrate how fantastically powerful a chatbot can be, showing how AI fits into a larger workflow isn’t as straightforward.

The metaphor of a solution looking for a problem comes to mind. While AI has shown incredible capability, the abili...