Thor Philogene is the CEO and co-founder of Stravito. He can be reached at thor@stravito.com.

“With great power comes great responsibility.” You don’t have to be a Marvel buff to recognize that quote, popularized by the Spider-Man franchise. And while the sentiment was originally in reference to superhuman speed, strength, agility and resilience, it’s a helpful one to keep in mind when discussing the rise of generative AI.

While the technology itself isn’t new, the launch of ChatGPT put it into the hands of millions, and for many that felt like gaining a superpower. But as with any superpower, what matters is how you use it. Generative AI is no different: the potential is great, for good and for evil.

Organizations now stand at a critical juncture in deciding how they will use this technology. Ultimately, it comes down to taking a balanced perspective – seeing the possibilities as well as the risks, and approaching both with an open mind.

In this article, we’ll explore both the possibilities and the risks of generative AI for insights teams and equip you with the knowledge you need to make the right decisions that will move your team forward.

Generative AI refers to deep-learning algorithms that produce new content based on a prompt and the data they’ve been trained on. While traditional AI systems are built to recognize patterns and make predictions, generative AI can create new content such as text, code, audio and images.

The technology behind text-based generative AI tools like ChatGPT is the large language model (LLM),1 a type of machine learning model that can perform a variety of natural language processing tasks such as generating and classifying text, answering questions and translating text.
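To make the prompt-in, content-out idea concrete, here is a minimal Python sketch using the open-source Hugging Face transformers library; the library choice, model name and generation settings are illustrative assumptions rather than anything prescribed in this article.

# A minimal sketch of prompt-based text generation with an openly available LLM.
# Assumes the Hugging Face "transformers" package (and a backend such as PyTorch)
# is installed; the model name and settings below are illustrative choices.
from transformers import pipeline

# Load a small, openly available language model for text generation.
generator = pipeline("text-generation", model="gpt2")

# The prompt steers what the model produces.
prompt = "Three ways insights teams could use generative AI are"

# Generate a short continuation of the prompt and print it.
result = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(result[0]["generated_text"])

Larger commercial models work the same way in principle: the user supplies a prompt, and the model generates new text conditioned on that prompt and its training data.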

The insights industry is no stranger to change. The tools and methodologies available to insights professionals have evolved rapidly over the past few decades. At this stage, the exte...