To Manage Generative AI Risk, Understand the Terms of Use

Most systems do not protect sensitive information used in prompts, and users bear most of the risk of using generative AI systems and outputs.

Generative AI systems use the information provided in prompts — along with the data that the systems were trained on — to create outputs. Each system has its own rules for how that information is protected, whether it can be used for training, and how outputs can be used by users and the system itself. Many, if not all, of these rules are contained in the system’s terms of use.

Companies should pay close attention to the terms of use of any generative AI system they use to ensure that they understand their rights and obligations and can protect their confidential information.

Inputs usually are not protected

The terms of use typically state that information provided in prompts or inputs is not considered confidential and that the system may use inputs for future training. Some AI companies let users choose to have their inputs excluded from training, but that does not mean that inputs will be treated confidentially. In most cases, inputs will not be adequately protected, and users should not enter confidential or sensitive information as prompts, including personal information or proprietary intellectual property such as software source code.

Users own outputs (in most cases), but IP protections are unclear

Terms of use often state that the user owns the outputs resulting from prompts they enter into the generative AI system. This may seem desirable, but users should consider some nuances to understand the implications of this arrangement.

Even if the terms of use indicate that the user owns the output, ownership of IP rights for generative AI outputs is not clear under existing copyright and patent law, and enforcing IP rights may be difficult or impossible. For more detail, see “How do intellectual property rights apply to generative AI outputs?”

Moreover, the terms of use might still include restrictions on how outputs can be used, particularly on uses that facilitate competition with the AI system that generated the output. Users should check in advance to be sure that their use case is allowed by the system’s terms of use.

In some cases, terms of use may give the generative AI company an express right to use outputs, even when the user owns them. Some even stipulate that the AI company owns the output, although this remains uncommon. In such cases, it is important to understand how the user is permitted to use outputs and whether the permitted uses are adequate for the user's purposes.

Users bear most of the risk of using systems and outputs

Most terms of use stipulate that users bear all risks associated with using the generative AI system and its outputs, and they disclaim all representations and warranties. In such cases, the generative AI company offers no indemnity or other protection to the user.

Indeed, standard terms of use typically require users to make broad representations and warranties, and to provide other protections, in favor of the generative AI company, including an indemnification against infringement claims arising from the user's inputs and the tool's outputs. In most cases, the user bears all risk related to the output of the AI tool, so it is critical for users to conduct thorough due diligence on any output before using it.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Goodwin | Attorney Advertising
