How In-House Counsel Can Prepare for Generative AI Risks

Integreon

Generative artificial intelligence legal applications have arrived. They include everything from automated contract drafting to effective on-demand clause-level search. And more are coming to the market every day.

But are in-house legal teams ready for them? Have they thought through all the security and legal implications?

Typically, before companies acquire new legal-focused technology, they must ensure that it meets data privacy and security requirements. In-house legal teams often rely on their information security departments to screen new applications for the usual litany of concerns, including cybersecurity weaknesses.

These threshold considerations have become so standardized that few give them much thought.

In a typical request for proposal, vendors must answer questions about where their application is hosted, how they transfer data, whether they run vulnerability tests, and whether they hold industry cybersecurity certifications such as SOC 2 and ISO 27001.

These data points all remain important considerations, but they do not go far enough with generative AI.

New Data Security Questions

Generative AI tools are built on large language models. Depending on how a vendor has configured them, these models may retain, learn from and reuse the data fed into them.

Inside many companies, that learning process will raise concerns about whether their data is directly or indirectly helping other companies, including their competitors.

In this context, companies should ask whether they can share their data with an AI application while preventing outsiders from benefiting from that data.

That concern, in turn, raises the question of whether the AI can continue to improve if user companies do not share their data.

Companies embarking on generative AI application use must understand how these two interests are balanced.

In-house legal teams need to start probing these issues with their technology vendors now. Those vendors that employ large language models should be able to assure customers they do not retain customer data or use it to train their models.

In addition to asking about data retention, in-house lawyers or their information security teams should ask vendors the following questions:

  • Can the vendor describe exactly what data is shared with the large language models to produce the responses provided in the tool? Is it encrypted at rest and in transit? How long is it retained? Is it identifiable to your organization?
  • Has the vendor put automatic data deletion features in place? It is important to know whether the system requires your company to actively delete its content to avoid inadvertent use. Also, can your company remove all traces of its content should it discontinue the application?
  • Are the prompts or questions sent to the large language model based on generic data models the vendor has created, or do they rely on customer users to frame questions, potentially exposing confidential data to the platform? Determine whether you will need to educate your user community on querying the AI effectively, or whether the tool addresses this concern by presenting structured output and controlling queries behind the scenes.
  • If the tool offers market comparison information, is participation in contributing data to that comparison opt-in or automatic?
  • Does the tool provide ways for your company’s users to benefit from market information without sharing data, or does it provide a means to share some level of data without compromising confidentiality?

When generative AI applications are being explored, in-house legal teams should partner with their information security teams to develop these screening questions, both to be sure the applications are fit for their environments and to address the additional privacy and data security concerns these capabilities raise.

Vetting Existing Applications

But what about existing applications? Many, if not most, existing legal software applications, as well as many finance, human resources and other business applications, will be employing generative AI capabilities.

Overall, these applications bring much-needed innovation to routine work and will likely mean productivity gains across many organizations.

But generative AI functions will likely be introduced without any information security review: in most organizations, applications already in use are not rescreened once they are deployed and operating without incident.

Now is a good time for in-house teams to ask existing vendors whether generative AI features are planned for upcoming software releases. If the answer is yes, legal should team up with information security departments to ask additional questions about data use and security before permitting the installation of generative AI capabilities.

Risks Prompted by Uncontrolled Access

With great power comes great responsibility. That old expression applies directly to the use of generative AI technology, and the faster in-house legal teams recognize this new reality, the better off their companies will be.

Take generative AI contract drafting applications. Before their advent, companies could constrain contract drafting through clause libraries and online playbooks.

Generative AI applications, by contrast, enable the drafting of contracts and other legal content guided by the tool's own interpretations, which poses potential legal risks.

An AI tool that offers drafting options based on the broad training of a market-facing large language model may suggest clause alternatives that exceed a company's risk appetite or, if the agreement is not reviewed holistically, conflict with its other terms.

A new or junior drafter relying on unvetted AI guidance without oversight could expose the company to unacceptable and costly terms, and could even undermine established customer or supplier relationships.

In-house counsel teams need to review who will be permitted to access this type of machine guidance. They also need to use tools that provide guardrails, such as enforced online playbooks, to prevent undue reliance on unvetted machine suggestions.

In other departments, such as HR, the stakes are even higher. Tools that might influence hiring decisions, promotional opportunities, or disciplinary actions raise bias and privacy concerns.

An excited HR department might see automated screening or similar features added to an existing application as a productivity boon, but it is up to legal and its information security partners to raise potential red flags before deployments launch.

Some companies have recently announced policies banning consumer-grade AI tools and prohibiting employees from putting company data into generative AI applications. While these steps are appropriate, on their own they are insufficient to reduce risk and assure data security, privacy and responsible usage.

It is critical that legal teams step up to review every application where generative AI is being considered and get ahead of usage parameters and vendor capabilities.

Generative AI is unlike any tool used before. We must take this moment to learn about its use and implement safeguards against the new risks it brings.

And we only have a moment. The generative AI wave is upon us. Its ease of use and its power to fuel tremendous productivity gains mean it will be employed, with or without safeguards, and companies cannot ignore it.

Published by Law360 Pulse
