Ask the tough questions

Rob Key is founder and CEO of Converseon. He can be reached at rkey@converseon.com.

The surge in adoption of AI technologies across organizations and geographies has given rise to an equally urgent sprint by governing bodies to establish frameworks and processes to help ensure their safe and effective use. While these initiatives are positive and important steps in the right direction, they are admittedly overdue, given that AI technologies are not new.

While current concerns about AI focus primarily on areas with privacy-related questions (such as policing, social credit systems, facial recognition and regulated industries), the social, VoC and media intelligence space will not remain immune for long.

While most use cases in the space are currently considered low-risk from a regulatory perspective, there is significant risk to the brand itself in making business decisions based on poor-quality data and analysis.

The fundamental goal of these regulatory efforts is to eliminate AI bias, increase accuracy and build trust in the systems. This means higher data standards; greater transparency and documentation of AI systems; measurement and auditing of their functions (and model performance); and human oversight and ongoing monitoring. Indeed, even without forced regulation, these processes represent critical best practices that warrant immediate adoption. In the near future, it is likely that almost every leading organization will have a form of AI policy in place that adheres closely to these standards.

Yet these are precisely the areas where the vast majority of providers currently fall dangerously short on technology and process. Poor-quality sentiment scoring and opaque systems, for example, have created skepticism about the resulting data and insights and contributed to an overall trust deficit that has stifled adoption in important areas.

Aligning with these standards will help reverse these perceptions, but it will require all stakeholders to substantially elevate their technology, requirements and systems. It's a process that needs to begin now, before these requirements kick in, because doing so will reap clear and immediate benefits: mitigated enterprise and consumer risk, alignment with emerging global standards and substantial improvement in accuracy, adoption and impact.

Perhaps most importantly, it will help engender more trust among key stakeholders, not just in the AI technology itself but also in all the solutions and products that leverage it.

While the U.S. has recently announced agreement among many leading AI organizations on principles for self-regulation, the EU's AI Act is the first major proposed AI law and generally represents an evolved and thoughtful approach. Its principles are representative of efforts elsewhere and are likely a harbinger of what's to come globally.

The Act categorizes AI use cases from unacceptable- to high- to low-risk and requires corresponding, specific actions that range from stringent to nominal. High-risk areas include employment, transportation and more. Social and media analysis and market research, depending on how the data is used, are considered largely low-risk at this point. However, this is just a starting point and we believe it's likely that, over time, the standards applied to high-risk use cases today will eventually migrate to other, lower-risk use cases.

The crux of the Act requires strong data governance; accurate training; the elimination of potential bias within models; clear and transparent model performance validation; tracking and auditing; and the ability for human-in-the-loop intervention if the models go off the rails. Model training has an important and prominent focus. The Act states:

“High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High-quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system…”

The Act notes that taking these steps to build trust in these systems is simply essential. Untrusted AI is doomed to failure.

Let’s contrast these standards and requirements with the application of AI to current social, voice-of-customer and media analysis.

While the application of some level of AI is pervasive across most social, media and CX platforms, it is highly uneven. According to the Social Intelligence Lab's 2023 State of Social Listening study, data accuracy remains one of the industry's biggest complaints. In many systems there is negligible human-in-the-loop oversight or ability to fine-tune or modify models. One-size-fits-all classification models generally take precedence over more accurate, domain-specific ones.

Model performance reporting and auditing are mostly opaque or one-off – if available at all (in most cases they are not). Further, the training processes and data used are most often black-box and largely unavailable to users of the technology (eliminating bias in model training can be a complex task that requires sophisticated end-to-end processes).

If asked, most users simply do not know the specific performance of their models or the accuracy of their data classification, yet they often make business decisions based on this data. If probed on specifics, many providers of AI gloss over details of their capabilities. Unclear marketing, promotional materials and other documentation often just muddy the picture. This state of affairs is simply unsustainable in this new environment.

To their credit, organizations like the Institute of Public Relations, ESOMAR and the Association of Media Evaluators are working to educate and generate consensus for action, but those efforts remain mostly early-stage and aspirational. Importantly, an increasing number of analytics- and technology-savvy brands are demanding greater visibility and transparency – features of trusted AI – which is a key impetus for change. Without pressure from buyers, many technology providers simply will not prioritize the development of key "trusted AI" features.

Here are some key questions and topics we recommend for consideration when evaluating AI vendors, drafting RFPs or participating in relevant industry groups:

  • Conduct a current assessment. Does your team understand this technology well enough to effectively evaluate it and establish the right processes? Do you need to improve education, especially among key stakeholders? Are you asking vendors the right questions?
  • Who are your current vendors and what is the state and quality of their trusted AI technologies and processes, if any? If none, what is their roadmap? How are your data and insights from social and media analysis being used? How do they align with high- and low-risk categories? Where could the combination of trusted AI and unstructured data provide your organization with even greater value?
  • How are models trained? Is it in-house or via a third party? What specific roadmap and strategy do your vendors have to align and elevate their offerings to these standards? Are they capable of working with third-party audit and trusted AI platforms? How do they conform to key trademark and privacy requirements? What is their timeline for action?
  • What process is used to eliminate potential bias? Are there robust data discovery capabilities? Is the model training conducted by third parties or domain experts? Are there intercoder reliability processes (a minimal example follows this list)? How do you ensure the highest data quality? How are models scored and evaluated? Can you access domain- or industry-specific models? And can your team participate in the fine-tuning or are you stuck with a static, one-size-fits-all model that doesn’t meet your requirements?
  • How accurate is your model? How can you know and verify this? Can you access and audit the training data and models directly and see the precise performance of your model at any point in time? Is the model evaluation process comprehensive? Does it incorporate standard measures (F1, precision and recall) or more? (A minimal evaluation sketch follows this list.) How often is model performance assessed? And is there an available audit trail of model performance over time? Do you have data drift detection technology providing advance warning that models might need to be retrained and updated? Is model performance tracked and registered or is it “train it and forget it”?
  • Is there a model governance system? Can you or your organization provide input or changes to the system? Can you track and see the performance of all your models across the organization in near real time? Is there an end-to-end system to build, fine-tune, integrate, validate and deploy models efficiently? How does it work and how is it accessed? Is there a process for feedback and model optimization?
  • Is there a human-in-the-loop capability for oversight and intervention? How does it work? Can you have direct access? And if models do go off course, what processes are in place to help explain why and determine what corrective action to take? For many use cases, the AI Act demands that transparency be built in so that users can interpret the system’s output (and challenge it if necessary).
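
To make the bias and data-quality questions above concrete, here is a minimal sketch of one common intercoder reliability check, Cohen's kappa, applied to two hypothetical human coders labeling the same sample of posts. The labels, label set and rough agreement threshold noted in the comments are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: Cohen's kappa for two hypothetical annotators labeling
# the same sample of social posts. Values above roughly 0.6-0.7 are often
# treated as acceptable agreement before labels are used for training.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)

    # Observed agreement: share of items where the two coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Expected agreement by chance, from each coder's label distribution.
    dist_a, dist_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum((dist_a[label] / n) * (dist_b[label] / n) for label in labels)

    return (observed - expected) / (1 - expected)

# Hypothetical sentiment labels from two human coders.
coder_a = ["pos", "neg", "neu", "pos", "neg", "pos", "neu", "neg"]
coder_b = ["pos", "neg", "pos", "pos", "neg", "neu", "neu", "neg"]
print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")
```

If two trained coders cannot agree on the labels beyond chance, no model trained on those labels can be meaningfully validated against them.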
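
The accuracy and audit-trail questions can be made equally concrete. The sketch below assumes a hypothetical validation set, label set, model name and log file; it shows how per-class precision, recall and F1 might be computed and appended to a running audit record so performance is tracked over time rather than reported once.

```python
# Minimal sketch: per-class precision, recall and F1 for a sentiment model,
# plus an append-only record so performance can be audited over time.
# The gold labels, predictions, model name and log path are hypothetical.
import json
from datetime import datetime, timezone

def per_class_metrics(gold, predicted, labels):
    results = {}
    for label in labels:
        tp = sum(g == label and p == label for g, p in zip(gold, predicted))
        fp = sum(g != label and p == label for g, p in zip(gold, predicted))
        fn = sum(g == label and p != label for g, p in zip(gold, predicted))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        results[label] = {"precision": round(precision, 3),
                          "recall": round(recall, 3),
                          "f1": round(f1, 3)}
    return results

# Hypothetical validation set scored by the deployed model.
gold      = ["pos", "neg", "neu", "pos", "neg", "neu", "pos", "neg"]
predicted = ["pos", "neg", "pos", "pos", "neu", "neu", "pos", "neg"]
metrics = per_class_metrics(gold, predicted, labels=["pos", "neg", "neu"])

# Append a timestamped record so there is an audit trail, not a one-off score.
record = {"model": "sentiment-v2",  # hypothetical model name
          "evaluated_at": datetime.now(timezone.utc).isoformat(),
          "metrics": metrics}
with open("model_audit_log.jsonl", "a") as log:  # hypothetical log file
    log.write(json.dumps(record) + "\n")
print(json.dumps(metrics, indent=2))
```

A vendor claiming transparent, auditable performance should be able to produce something equivalent to this record history on demand, however their own tooling implements it.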

Specifics matter

These questions represent only a partial (but important) list of areas requiring critical discovery, and each deserves an in-depth, detailed response. When it comes to AI approaches, specifics matter.

The time for this effort is now. Though initial legislation is not focused squarely on this category, the industry should still take aggressive steps to abide by the standards. The growing importance and impact of insights derived from unstructured data demand the most trusted AI. And as trust is gained, the insights and solutions will continue to expand across essential areas ranging from corporate sustainability efforts to product innovation, brand reputation and customer experience.

Now is also an ideal time to get more involved with your industry groups for education, consensus development and representing the category before key regulators, academics and other influentials.

Moreover, taking specific actions now will not only get ahead of potential risk and help ensure compliance with internal AI policies; it will also generate significant improvement in model effectiveness, leading to broader adoption and even predictive and prescriptive analytics that better serve your organization and its key stakeholders. Finally, challenging your own internal capabilities, and the industry at large, is the critical leverage point required to level up capabilities in a manner that safeguards consumers and helps assure trusted implementations of these essential technologies.

Clearly, the payoffs of being a leader – and not a laggard – in trusted AI are simply too important for the industry at large to wait any longer.