Possibilities and risks

Thor Philogene is the CEO and co-founder of Stravito. He can be reached at thor@stravito.com.

“With great power comes great responsibility.” You don’t have to be a Marvel buff to recognize that quote, popularized by the Spider-Man franchise. And while the sentiment was originally in reference to superhuman speed, strength, agility and resilience, it’s a helpful one to keep in mind when discussing the rise of generative AI.

While the technology itself isn’t new, the launch of ChatGPT put it into the hands of millions, something that for many felt like gaining a superpower. But as with any superpower, what matters is how you use it. Generative AI is no different: there is potential for good and for evil.

Organizations now stand at a critical juncture to decide how they will use this technology. Ultimately, it’s about taking a balanced perspective – seeing the possibilities but also seeing the risks and approaching both with an open mind. 

In this article, we’ll explore both the possibilities and the risks of generative AI for insights teams and equip you with the knowledge you need to make the right decisions that will move your team forward.

A quick refresher on generative AI 

Generative AI refers to deep-learning algorithms that are able to produce new content based on data they’ve been trained on and a prompt. While traditional AI systems are made to recognize patterns and make predictions, generative AI can create new content like text, code, audio and images. 

The technology behind text-based generative AI is the large language model,1 a type of machine learning model that can perform a variety of natural language processing tasks, such as generating and classifying text, answering questions and translating.
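The “one model, many tasks” property comes from the fact that each task is simply a different prompt sent to the same model. The sketch below illustrates that idea; `call_llm` is a hypothetical stand-in for a real LLM API (it is not any specific vendor’s interface), and the prompt wording is purely illustrative.

```python
# Illustrative sketch: one language model, many NLP tasks.
# call_llm is a hypothetical stand-in for a hosted LLM API call;
# here it just echoes the prompt so the sketch stays self-contained.

def call_llm(prompt: str) -> str:
    return f"[model response to: {prompt[:40]}...]"

def summarize(text: str) -> str:
    return call_llm(f"Summarize the following report in two sentences:\n{text}")

def classify(text: str, labels: list[str]) -> str:
    return call_llm(f"Classify this feedback as one of {labels}:\n{text}")

def translate(text: str, target_language: str) -> str:
    return call_llm(f"Translate the following into {target_language}:\n{text}")

# The same underlying model handles all three tasks; only the prompt changes.
```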

How can generative AI enhance insights?

The insights industry is no stranger to change. The tools and methodologies available to insights professionals have evolved rapidly over the past few decades. At this stage, the extent and speed of the changes brought by increasingly accessible generative AI are something we can only speculate on. But there are certain foundations to have in place that will help insights teams figure out how to respond quickly as more information becomes available. 

Ultimately, it all comes back to asking the right questions and doing a thorough analysis – skills at which insights professionals are experts.

Getting insights faster 

One area where we see a lot of potential is summarizing information. For example, companies have already been using generative AI to create auto-summaries of individual reports, removing the need to manually write an original description for each one.

We also see potential to develop this use case further with the ability to summarize large volumes of information to answer business questions quickly, in an easy-to-consume format. This could look like typing a question into a search bar. The generative AI platform would then leverage the company’s internal knowledge to present a succinct answer that links to additional sources.
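A simplified sketch of that flow, assuming a toy word-overlap retriever in place of the embedding search and LLM summarization a real platform would use. All names and report contents here are illustrative, not drawn from any actual product.

```python
# Toy sketch of "ask a question, get an answer grounded in internal reports".
# A real platform would use embeddings plus an LLM; this uses plain word overlap.

internal_reports = {
    "gen-z-snacking.pdf": "Gen Z consumers prefer bold snack flavors and small formats.",
    "pricing-study.pdf": "Price sensitivity rose sharply in the last two quarters.",
}

def retrieve(question: str, reports: dict[str, str], top_n: int = 1) -> list[str]:
    """Rank reports by how many words they share with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        reports,
        key=lambda name: len(q_words & set(reports[name].lower().split())),
        reverse=True,
    )
    return ranked[:top_n]

def answer(question: str) -> dict:
    sources = retrieve(question, internal_reports)
    # An LLM would synthesize a succinct answer from the retrieved text;
    # here we return it verbatim, with links back to the sources.
    return {"answer": internal_reports[sources[0]], "sources": sources}

result = answer("What snack flavors does Gen Z prefer?")
```

The key design point is that the model answers from the company’s own knowledge base and cites its sources, rather than generating from its general training data alone.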

For insights managers, this would mean being able to answer simple questions more quickly, and it could also help handle much of the groundwork when digging into more complex problems.

Democratizing your insights 

Generative AI technology could also help broaden the flow of insights throughout an organization. More specifically, key business stakeholders could easily access critical insights without needing to directly involve an insights manager. By removing barriers to access, generative AI could help support organizations that are on an insights democratization journey.

It could also help to alleviate common concerns associated with insights democratization, like business stakeholders asking the wrong questions. In this use case, business stakeholders without research backgrounds can be prompted to ask more relevant questions. 

Tailored communication for the right audiences 

Another opportunity that comes with generative AI is the ability to tailor communication to both internal and external audiences.

In an insights context, there are several potential applications. It could help make knowledge-sharing more impactful by personalizing insights communications for various business stakeholders. 

It could also be used to tailor briefs to research agencies as a way to streamline the research process and minimize the back-and-forth involved.

What are the drawbacks to generative AI for insights pros? 

As you’re likely aware, there are also many risks associated with generative AI in its current state, particularly for insights professionals. 

The information may not be trustworthy. One fundamental risk associated with generative AI is that you can’t fully trust the information it gives you. Generative AI is statistical, not analytical: it works by predicting the most likely text to come next, not by verifying facts. Even if you give it the wrong prompt, you’re still likely to get a highly convincing answer.

What becomes even trickier is the way it can blend correct information with incorrect information. In situations where million-dollar business decisions are being made, the information needs to be trustworthy. 
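The “statistical, not analytical” point can be made concrete with a toy next-word predictor: it always returns the most frequent continuation it saw in training, with no notion of whether that continuation is true in the case at hand. This tiny bigram model is an assumption-laden stand-in for an LLM, but the failure mode it shows is the same in kind.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": predicts the most frequent next word seen
# in training, regardless of whether that continuation is factually right.
training_text = (
    "our brand share rose in q1 . our brand share rose in q2 . "
    "our brand share fell in q3 ."
)

follows: defaultdict = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return follows[word].most_common(1)[0][0]

# Asked about share performance, the model emits the most common pattern:
print(predict_next("share"))  # "rose" -- even though share fell in q3
```

The model confidently says “rose” because that was the majority pattern, exactly the kind of plausible-but-potentially-wrong answer the paragraph above describes.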

It’s also worth noting that ChatGPT was only trained on information through late 2021, which means that it won’t take current events and trends into account.

Additionally, many questions surrounding consumer behavior are complex. While a question like “How did Millennials living in the U.S. respond to our most recent concept test?” might generate a clear-cut answer, deeper questions about human values or emotions often require a more nuanced perspective. Not all questions have a single right answer, and when aiming to synthesize large sets of research reports, key details could fall through the cracks.

The sources aren’t always clear. Another key risk to pay attention to is a lack of transparency regarding how algorithms are trained. For example, ChatGPT cannot always tell you where it got its answers from, and even when it can, those sources might be impossible to verify or may not exist at all.

And because AI algorithms, generative or otherwise, are trained by humans and existing information, they can be biased. This can lead to answers that are racist, sexist or otherwise offensive.2 For organizations looking to challenge biases in their decision-making and create a better world for consumers, this would be an instance of generative AI making work less productive.

There can be security risks. Common use cases for ChatGPT involve generating e-mails, meeting agendas or reports. But putting in the necessary details to generate those texts may leave sensitive company information at risk.

In fact, an analysis conducted by security firm Cyberhaven found that of 1.6 million knowledge workers across industries, 5.6% had tried ChatGPT at least once at work, and 2.3% had put confidential company data into ChatGPT.3 Companies like JP Morgan, Verizon, Accenture and Amazon4 have banned staff from using ChatGPT at work over security concerns. And just recently, Italy became the first Western country to ban ChatGPT while investigating privacy concerns,5 drawing attention from privacy regulators in other European countries.

For insights teams or anyone working with proprietary research and insights, it’s essential to be aware of the risks associated with inputting information into a tool like ChatGPT and to stay up to date on both your organization’s internal data security policies and the policies of providers like OpenAI.

What are the next steps?

Generative AI offers both intriguing opportunities and clear risks for businesses, and there is still a lot that is unknown.

Insights leaders have the opportunity to show both their teams and organizations what responsible experimentation looks like. We’ve entered a new era of critical thinking, something that insights professionals are well-practiced in.

The path forward is to ask the right questions and maintain a healthy dose of skepticism without ignoring the future as it unfolds in front of you.

Make the tech your own 

A good place to start is by seeing the areas where you’re naturally drawn to using these tools. Before you invest in any solution, you want to make sure that generative AI will actually fit into your workflows.

Likewise, it’s a good idea to gauge how open you and your team are to incorporating these technologies. This will help you determine if there need to be more guardrails in place or, conversely, more encouragement to experiment responsibly. 

Sketch out the inefficiencies in your workflows and explore whether they could be automated in whole or in part. These are likely areas where generative AI could offer your team a major productivity boost.

Chances are you don’t have time for endless experimentation. A good way to focus your exploration is to look at the top priorities for your team and your organization and focus efforts where they will have the most impact.

Communication is key 

Be sure to clearly outline the risks for your function and organization and don’t hesitate to get advice from relevant experts in tech or security. Once the main risks are defined, you can align on the risk level you’re willing to tolerate. 

While minding the risks, also don’t be afraid to ask the good kind of what-if questions. If you see opportunities, be brave enough to voice them now. Likewise, listen to other parts of the organization to see what opportunities they’ve identified and what you can learn from them.

It’s our firm belief that the future of insights will still need to combine human expertise with powerful technology. The most powerful technology in the world will be useless if no one actually wants to use it.

Therefore, the focus for brands should be on responsible experimentation – finding the right problems to solve with the right tools – not implementing technology for its own sake. With great power comes great responsibility. Now is the time for brands to decide how they will use it.

References

1 https://www.techopedia.com/definition/34948/large-language-model-llm

2 https://www.cbsnews.com/news/chatgpt-large-language-model-bias-60-minutes-2023-03-05/

3 https://www.cyberhaven.com/blog/4-2-of-workers-have-pasted-company-data-into-chatgpt/

4 https://aibusiness.com/verticals/some-big-companies-banning-staff-use-of-chatgpt

5 https://www.reuters.com/technology/germany-principle-could-block-chat-gpt-if-needed-data-protection-chief-2023-04-03/