ISG Software Research Analyst Perspectives

With Generative AI, Any Dog Can Write a Contract

Written by Robert Kugel | Aug 29, 2023 10:00:00 AM

This title plays on the now-ancient meme from the 1990s: “On the internet, nobody knows you’re a dog,” which pointed to the challenge of anonymity posed by a new technology. In this case, though, I’m using it to highlight an opportunity that generative artificial intelligence presents: streamlining routine business functions that require some level of individual skill and experience to handle. Ordinary contracts are just one example of work products that require humans to create, edit, analyze, evaluate and act on. Similarly, but in a different sphere, physicians are excited by the prospect of having patient visit summaries managed by intelligent systems rather than completing this tedious administrative work themselves.

AI and generative AI have garnered enormous attention this year, provoking alarmists to weigh in on potentially catastrophic and even dystopian outcomes. There are perils aplenty ahead with this technology, and there are bound to be well-publicized failures. Yet an informed risk/benefit analysis would conclude that these efforts will have an overwhelmingly positive impact, resulting in rapid adoption of software that uses AI to enhance productivity. Ventana Research asserts that by 2027, more than 80% of organizations will use generative AI to create contracts and other complex documents and analyze them to find omissions and vulnerabilities.

One of several major failings of the alarmist approach to generative AI is a fixation on grandiose examples, which will probably be handled by humans for years to come. Those seeing grave danger in AI ignore the vast quantity of relatively simple and inconsequential activities that currently sap productivity, raise costs and prevent individuals from having the time to focus on more difficult issues that require training, skill and experience. Take business contracts: Somewhere between the formulaic sales orders that customer relationship management or enterprise resource planning systems already spit out programmatically and the complex, financially consequential contracts hammered out through negotiation, there are customized but relatively straightforward commercial agreements. These almost always require inside and sometimes outside legal counsel to write, edit, review and approve. In these cases, an appropriately trained system could generate a first draft, and a second, separately trained challenger system could then review it, highlighting holes and vulnerabilities in the contract language. Generative AI would not relieve individuals of their due diligence responsibilities in reviewing contracts, but they would bring fresher eyes to the task. Similarly, those receiving a proposed contract would have a trained assistant pointing out omissions and terms unfavorable to their side.
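
Here is a minimal sketch of that draft-and-challenge pattern in Python. It uses the OpenAI Python SDK purely as an illustration; the prompts, the model name and the helper functions (generate_draft, challenge_draft) are assumptions of this example, not a description of any vendor’s product.

```python
# A minimal sketch of the draft-and-challenge pattern, not a product design.
# The OpenAI SDK, prompts and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_draft(term_sheet: str) -> str:
    """Ask the 'writer' model for a first-draft commercial agreement."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You draft straightforward commercial agreements."},
            {"role": "user",
             "content": f"Draft a services agreement from these terms:\n{term_sheet}"},
        ],
    )
    return response.choices[0].message.content


def challenge_draft(draft: str) -> str:
    """Ask a separately prompted 'challenger' model to attack the draft."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are opposing counsel. List omissions, ambiguities "
                        "and terms unfavorable to the other party in the "
                        "contract you are given."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    draft = generate_draft("Net-30 payment terms, 12-month term, mutual NDA.")
    print(challenge_draft(draft))  # a human reviewer still signs off on both
```

The design point is the separation of roles: because the challenger is prompted as an adversary, it has no stake in defending the writer’s output. A human reviewer still approves the final language.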

Another alarmist failing is assuming that the winner-take-all dynamics common in some parts of the information economy will result in a handful of general large language models handling everything. Yet it’s almost certain that won’t be the case. Beyond general-purpose LLMs, vendors will create highly specialized applications for specific domains and use cases, and individual organizations will train these models on company data sets to ensure the results conform to their business requirements, practices and lingo.

Recently, there’s been excitement over tools designed to help physicians manage the paperwork that patient interactions require. Microsoft joined with Nuance in 2021 to provide a system for creating “ambient clinical documentation,” which was hailed at the time as a breakthrough but, in retrospect, performs only part of the task because it mainly transcribes the conversation. Generative AI makes it possible to do a more effective job of summarizing the visit and crafting an after-visit summary with recommendations. As with contracts, health care professionals will need to review the result and make corrections. Allowing a physician or nurse to focus more attention on the patient is likely to result in better outcomes, and those alarmists worried about the safety of applying AI to patient care should consider how much more effective care can be when it is bolstered by an ability to draw on data as well as intuition. Substantially reducing the time a practitioner must spend on administrative work – now the bane of corporate medicine – can also improve job satisfaction.
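
As an illustration of that transcribe-then-summarize flow, here is a minimal sketch. The OpenAI SDK and model names stand in for whatever services a real product would use; the prompt and function names are assumptions of this example, with the clinician as the final reviewer.

```python
# A minimal sketch of the transcribe-then-summarize flow. The SDK, models
# and prompt are illustrative assumptions, not a clinical product.
from openai import OpenAI

client = OpenAI()


def transcribe_visit(audio_path: str) -> str:
    """Turn a recorded visit into text, the part existing tools already do."""
    with open(audio_path, "rb") as audio_file:
        result = client.audio.transcriptions.create(
            model="whisper-1", file=audio_file
        )
    return result.text


def draft_after_visit_summary(transcript: str) -> str:
    """Draft an after-visit summary with recommendations from the transcript."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Summarize this clinical visit into a plain-language "
                        "after-visit summary with follow-up recommendations. "
                        "Flag anything uncertain for clinician review."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content  # a draft, not a medical record


# The clinician reviews and corrects the draft before it reaches the patient.
```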

Skeptics of AI’s ability to improve health outcomes are likely to remind us that IBM Watson Health was created for this purpose in the ‘teens. That well-funded effort foundered on the technology limitations of its day and the boil-the-ocean nature of its mission. Much of Watson Health was spun out and is now Merative, which is working on a set of more focused opportunities to apply AI to life sciences and health care. This example carries another lesson about real-world AI: Initially, advances are more likely to come from bite-sized efforts than from headline-grabbing broad visions.

Another bugaboo of AI alarmists is the imagined impact of allowing systems to make all decisions, especially those with major consequences. A more likely future is one where small, inconsequential decisions – especially those that are part of a sequence of repetitive actions requiring some examination of conditions and potential consequences – are performed automatically, while decisions with significant complexity or the potential for a severely negative outcome are made by humans. Between these two extremes will be a broadening set of decisions that can be left entirely up to a system or automated only when the ambiguity of the outcome and the degree of risk are below a desired level. With machine learning, the range of what is performed automatically will widen over time with experience.
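
Here is a minimal sketch of that routing logic, assuming a proposed decision comes with a confidence score and a worst-case cost; the fields and threshold values are illustrative assumptions, not recommendations.

```python
# A minimal sketch of threshold-based routing: automate only when ambiguity
# and downside risk are both acceptably low. All values are illustrative.
from dataclasses import dataclass


@dataclass
class Decision:
    action: str
    confidence: float  # the system's certainty in its choice, 0.0 to 1.0
    max_loss: float    # worst-case cost if the choice is wrong, in dollars


CONFIDENCE_FLOOR = 0.95   # below this, ambiguity is too high to automate
RISK_CEILING = 1_000.00   # above this, consequences demand a human decision


def route(decision: Decision) -> str:
    """Return 'automate' or 'escalate' for a proposed decision."""
    if decision.confidence >= CONFIDENCE_FLOOR and decision.max_loss <= RISK_CEILING:
        return "automate"
    return "escalate"  # a human makes the call


# Reordering a low-cost stock item runs automatically ...
print(route(Decision("reorder office supplies", 0.99, 120.00)))    # automate
# ... while a consequential pricing change goes to a person.
print(route(Decision("reprice flagship product", 0.97, 250_000)))  # escalate
```

As the paragraph above suggests, the two thresholds need not be static: with machine learning, they can be relaxed gradually as accumulated experience shows where automation is safe.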

Moreover, rather than a future of monolithic AI and generative AI systems, practitioners will likely take a robot-versus-robot approach ‒ having one system write and another review, as in my contract example. In the world of finance, some regulators worry that stability will be imperiled if AI drives herding behavior that leads to panics and crashes. Leaving aside the ability of humans to do this on their own without the assistance of cutting-edge technology (meme stocks!), not to mention the government’s contribution to creating the conditions for past investment bubbles and panics, this concern ignores another feature of financial markets: contrarians and hedge funds ready and able to bet against the crowd. It is also predicated on the assumption that investors will use only one or a handful of highly similar systems, an assumption that flies in the face of reality. Scads of independent algorithmic trading systems are already used today to find opportunities to trade against someone else’s algorithmic system.

There is tremendous potential in the various forms of AI that are making their way into the market. At the same time, it’s essential to understand the limitations and pitfalls of the technology. To be competitive, organizations must have the people and the willingness to assess how to utilize AI in every part of the business system. This is a tone-at-the-top issue, so I recommend that leadership teams ensure they have the resources to quickly adopt AI capabilities in a fast-follower mode.

Regards,

Robert Kugel