The recent launch of the AI Alliance, a coalition of more than 50 corporations and research institutions engaged in artificial intelligence (AI) development (including AMD, CERN, Cornell University, Dell Technologies, IBM, Intel, Linux Foundation, Meta, NASA, Oracle, ServiceNow and Sony Group), aims to advance open, safe and responsible AI through open innovation and collaboration.
The AI Alliance has both commercial motivations and (I hope) a collective voice loud enough to build understanding and trust in the technology, preventing or at least mitigating the potentially negative impact of ill-informed regulation from either the United States or the European Union. Moreover, all the discussion of AI as a shiny new object misses the reality that the technology is already at work delivering practical and safe results that improve productivity and performance.
One motivating factor behind the creation of this group is a competitive concern that the tie-in between Microsoft and OpenAI, along with the latter’s expected pivot to more of a commercial endeavor, could make generative AI technology more proprietary and less open. Few people under the age of 50 can recall a time when computing systems were closed, based on proprietary standards that enforced vendor lock-in for hardware and software. Until the early 1990s, there was little room for third parties to develop software or create peripheral devices that worked with those closed systems unless they were granted permission by the vendor and paid extortionate fees. Then, technology buyers rebelled, and systems that were far more open (but not completely open) became the norm. This was followed by the launch of the Open Source Initiative (OSI) in 1998, which marked a key beginning for open source software, a construct where the copyright holder grants users the rights to use, study, change and distribute the software and its source code to anyone and for any purpose, often in a public, collaborative fashion.
Beyond spurring competition, one of the most consequential objectives of the AI Alliance is developing tools and methods for safety, security and trust that are essential for the further rapid advancement and adoption of AI and generative AI. This work is necessary because of what I see as an overblown reaction to the purported dangers of generative AI and AI generally. Public awareness of AI changed permanently a year ago, when ChatGPT went viral and became something people could touch and use. With no exposure to the work that had been underway for years, the general public and politicians were susceptible to fear of the unknown. Having a large, diverse, self-interested body dedicated to furthering the use of AI can be a useful counterweight to those looking to serve their own interests by sowing fear and doubt about an innovative technology. This is especially important for users of business computing, who have a great deal to gain from the rapid expansion of AI-enabled features in the software they use to run their organizations.
Those seeing grave danger in AI and generative AI miss its potential to reduce the time spent on the vast quantities of simple and seemingly inconsequential activities that currently sap productivity.
The work that the AI Alliance proposes to do complements work already done by the National Institute of Standards and Technology (NIST), which released an AI Risk Management Framework in early 2023, along with a video explaining the approach. The framework is general enough to be applicable to most situations and use cases. It focuses on how developers of the technology should approach the risks of harm to people, organizations and ecosystems, for example, the risk to interdependent social structures such as financial institutions when individual actions could create a software-driven cascade of negative outcomes.
Consistent with all risk management in complex systems and environments, there is a significant challenge in developing accurate and meaningful methods of defining and measuring risks. Given the immaturity of the technology, there are plenty of unknowns that will require management, but for the vast number of use cases now contemplated, there seem to be very few unknown unknowns. Beyond that, individual organizations have their own tolerance for risk and prioritize risks differently, so the process of integrating AI risk management needs to be defined in ways that achieve each organization’s objectives while still following general guidelines.
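To make that concrete, here is a minimal sketch of the kind of risk register such a framework implies, with organization-specific scoring and tolerance. The risk entries, scales and threshold below are illustrative assumptions, not anything prescribed by NIST or the AI Alliance.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a hypothetical AI risk register."""
    name: str
    harm_target: str  # "people", "organization" or "ecosystem", per NIST's grouping
    likelihood: int   # 1 (rare) to 5 (near certain)
    impact: int       # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring; real programs often use
        # richer, organization-specific methods.
        return self.likelihood * self.impact

# Each organization sets its own tolerance and prioritizes accordingly.
RISK_TOLERANCE = 9  # hypothetical threshold

register = [
    AIRisk("Biased output in credit scoring", "people", likelihood=2, impact=5),
    AIRisk("Hallucinated figures in a report", "organization", likelihood=3, impact=3),
    AIRisk("Cascading automated transactions", "ecosystem", likelihood=1, impact=5),
]

# Rank the register so the highest-scoring risks are addressed first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    action = "MITIGATE" if risk.score >= RISK_TOLERANCE else "accept/monitor"
    print(f"{risk.score:>2}  {risk.name} ({risk.harm_target}) -> {action}")
```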
At its core, the ultimate objective of applying risk management to AI is to ensure that the results of any AI system are valid and reliable as well as accountable and transparent. The last two mean that systems are not designed as black boxes immune to inspection and therefore to understanding. The outcome of using AI must be readily verifiable so that results are explainable and interpretable. Like any enterprise software, AI systems must be designed to be secure and resilient, both to prevent nefarious modifications and, especially where AI is part of a core system, to recover rapidly from shocks of all kinds. The systems also must respect individual privacy and, as much as possible, limit bias in their creation. On that last point, where there’s transparency, a dispassionate application of machine learning (ML) is likely to be less biased than processes in which humans are involved. At the same time, there is also a danger that, consistent with human nature, AI systems will be accused of bias by those who find some objectively determined result unflattering or inconvenient.
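A minimal sketch of what accountability and transparency can look like in practice: every prediction is recorded with its inputs, model version and output, so a result can be reproduced and inspected after the fact. The function names and record fields are hypothetical, not any particular vendor’s API.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # hypothetical append-only audit trail

def predict_with_audit(model_version: str, features: dict, predict_fn) -> float:
    """Run a prediction and append an audit record for later review."""
    result = predict_fn(features)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": features,
        "output": result,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return result

# Stand-in scoring function for the example; a real system would call a model.
def toy_credit_score(features: dict) -> float:
    return round(0.4 * features["income_ratio"] + 0.6 * features["payment_history"], 3)

score = predict_with_audit("v1.2.0",
                           {"income_ratio": 0.7, "payment_history": 0.9},
                           toy_credit_score)
print(score)  # the logged inputs and model version make this output reproducible
```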
What’s often overlooked in current discussions about AI and generative AI is that both are already being applied to day-to-day activities in enterprises. A quick survey of currently available vendor offerings reveals a broad range of AI-enabled capabilities.
These capabilities aren’t especially flashy, and they might not seem consequential because they address productivity and effectiveness issues at an atomic level. However, because each of these and other seemingly inconsequential improvements is multiplied by the tens of millions every day, their collective impact on commerce and the economy will be substantial. And all these examples use technology in a way that can be easily explained and verified.
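A back-of-envelope illustration of that multiplication effect; every number here is an assumption for the sake of arithmetic, not data from any survey.

```python
# Purely illustrative: one small, "atomic" time saving scaled across many
# organizations and tasks per day.
seconds_saved_per_task = 30          # assumed saving from one small improvement
tasks_per_day = 50_000_000           # assumed aggregate task volume
hourly_cost = 40.0                   # assumed fully loaded cost of an hour, USD

hours_saved_per_day = seconds_saved_per_task * tasks_per_day / 3600
print(f"{hours_saved_per_day:,.0f} hours/day, "
      f"roughly ${hours_saved_per_day * hourly_cost:,.0f}/day")
# About 417,000 hours/day, or roughly $16.7 million/day, from one tiny improvement.
```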
Risk management for AI and generative AI capabilities will be a key requirement for all business software vendors, so I recommend that they have a robust, customer-centric approach in place before they introduce features and capabilities.
Buyers must also have internal AI risk management systems, processes and culture in place to ensure that they can take maximum advantage of the technology as quickly as possible.
Regards,
Robert Kugel