A Framework for Evaluating AI – A Tool for GCs

Polsinelli

Introduction

Interest in, and requests for guidance on, artificial intelligence (“AI”) tools exploded in early 2023 with the publicity surrounding the public launch of powerful and easily applied generative AI tools. With the spotlight has come swift change. Never have we witnessed such rapid adoption of a technology that carries so many legal, business, technical, ethical, social and other considerations. But analyzing an AI tool does not have to be overwhelming. We have developed a framework for General Counsel (“GCs”) making “AI decisions” when presented with either a new tool or a new use case.

I. Evaluating AI Tools and Establishing a Method of Trust

Many aspects of AI (and particularly generative AI) are currently on unsettled ground, but GCs can break the practical analysis into five parts:

1. What is the tool?

When reviewing a novel tool the organization wants to use, the first question is whether the tool actually is AI, meaning it relies on underlying machine learning. If so, what type of model is it? Is it part of the newly exploding wave of generative AI models, or a predictive model of the kind that has been in use for well over a decade?

What are the license terms and conditions? Is it a public tool, open source, or an enterprise instance? What protections is the vendor providing if, for example, the organization receives an IP infringement claim arising from use or distribution of the output content generated by the tool?

2. What is the use case?

AI does not solve all problems, and not all problems need AI. Current AI tools and their models are, at their core, prediction machines: aids relying on mathematical statistics, probabilities, and correlations (not reasoning or certainty). The accuracy of that prediction can vary depending on multiple factors, as can the tolerance for, and type of, error within a use case. Some use cases also simply do not work well with AI because individual human judgment or empathy may be necessary. Consider also whether the tool is sufficiently transparent for the use case. Finally, use cases may be affected by external factors such as legislation or industry trends. If a company uses AI tools to process compliance activities, for example, overreliance presents legal risk if the results contain errors.

3. What is the data going into it?

Data privacy and confidentiality concerns should be top of mind for GCs when reviewing a new tool. What type of information will go through the tool? Public information? Trade secrets? Will the vendor have any rights to that information as training data? Do customers have their own enterprise instance of the tool, or is that data and/or feedback flowing through a public instance? Even with a public instance, is there any risk of sensitive or trade secret data circulating through the tool showing up in a future output or influencing another customer’s result?

4. What is the output?

The “power” of an AI’s output, and the risk of overreliance on it, can also depend on its format. Does the tool produce a report to be further analyzed by humans, an answer or decision, or perhaps a binary “yes” or “no” with little to no transparency into the probability threshold? Risk may also depend on who receives that output. Is it inward-facing, used for reference, additional context or further review? Or is it outward-facing, a result given to customers that can influence their choices even if they lack expertise in the tool’s subject area?

5. Is it accurate?

Accuracy is, and will remain, the most critical factor in analyzing any AI tool. The tool’s accuracy must meet or exceed the thresholds applicable to the use case; otherwise, the tool may be more problematic than beneficial. The more accurate an AI tool appears, the less effort users tend to put into checking its results. To prevent blind trust, accuracy in AI results must not be presumed; rather, there should always be a “trust but verify” mentality that confirms accuracy and reinforces users’ understanding of the AI tool and the potential errors that may arise in use.

II. Conclusion

Considering the nascent regulatory landscape for the use of generative AI in business and the potential pitfalls of AI use, this framework for AI assessment reduces risk and error. A comprehensive, balanced approach will be needed as AI technology, regulations and industry-specific considerations continue to evolve.

Originally Published in the Houston Medical Times.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Polsinelli | Attorney Advertising
