With great power comes great responsibility. That old expression applies directly to the use of generative AI technology, and the faster in-house legal teams recognize this new reality, the better off their companies will be.
Take generative AI contract drafting applications. Before their advent, companies could constrain contract drafting through clause libraries and online playbooks.
Generative AI applications, on the other hand, draft contracts and other legal content guided by the tool's own interpretations, which poses potential legal risks.
An AI tool that offers drafting options based on the broad training of a market-facing large language model may suggest clause alternatives that exceed a company's risk appetite or, if not reviewed holistically, conflict with other terms in the agreement.
A new or junior person drafting a contract with unvetted AI guidance or without oversight could expose the company to unacceptable and costly terms, and even undermine established customer or supplier relationships with inappropriate guidance.
In-house counsel teams need to decide who will be permitted to use this type of machine guidance. They also need to adopt tools with guardrails, such as enforced online playbooks, that prevent undue reliance on unvetted machine suggestions.
In other departments, like HR, more significant issues exist. Tools that might influence hiring decisions, promotional opportunities, or disciplinary actions are subject to bias and privacy considerations.
An eager HR department might see automated screening or similar features added to an existing application as a productivity boon, but it falls to legal and their information security partners to raise red flags before deployments launch.
Some companies have recently announced policies banning consumer-grade AI and prohibiting the entry of company data into generative AI applications. While these steps are appropriate, they are insufficient on their own to reduce risk and ensure data security, privacy, and responsible use.
Legal teams must step up to review every application where generative AI is under consideration and get ahead of usage parameters and vendor capabilities before deployment.
Generative AI is unlike any tool that has come before. We must take this moment to learn how it is used and implement safeguards against the new risks it brings.
And we only have a moment. The generative AI wave is upon us. Its ease of use and power to fuel tremendous productivity gains mean that, with or without safeguards, it will be employed, and companies cannot ignore it.