Building Compliance Programs for AI Tools

Epstein Becker & Green

Artificial Intelligence (“AI”) applications are powerful tools that companies have already deployed to improve business performance across the health care, manufacturing, retail, and banking industries, among many others. From large-scale AI initiatives to smaller AI vendors, AI tools are quickly becoming a mainstream fixture in many industries and will likely spread to many more in the near future.

But are these companies also prepared to defend their use of AI tools should compliance issues arise later? What should companies do before launching AI tools, and what should they do to remain confident about compliance while the AI tools simplify and, ideally, improve processes? Improper application of AI tools, or improper operation of or outcomes from those tools, can create new types of enterprise risk. While the use of AI in health care presents many opportunities, the enterprise risks that may arise need to be effectively assessed and managed.

But How?

Traditionally, to manage enterprise risk and develop their compliance programs, health care companies have relied upon the multitude of guidance published by the Office of Inspector General of the U.S. Department of Health and Human Services (“OIG”), by industry associations such as the Health Care Compliance Association, and by other federal, state, and industry-specific sources. Compliance guidance focused specifically on the use of AI tools in health care is lacking at this time. However, the National Defense Authorization Act (“NDAA”), which became law on January 1, 2021, includes the most significant U.S. legislation concerning AI to date: the National Artificial Intelligence Initiative Act of 2020 (“NAIIA”). The NAIIA mandates the establishment of various governance bodies, in particular the National Artificial Intelligence Advisory Committee, which will advise on matters relating to oversight of AI using regulatory and nonregulatory approaches while balancing innovation and individual rights.

In the absence of AI-specific guidance, companies can look to existing compliance program frameworks, e.g., the seven elements of an effective compliance program identified by the OIG (written policies and procedures; a designated compliance officer and compliance committee; effective training and education; effective lines of communication; internal monitoring and auditing; enforcement through well-publicized disciplinary guidelines; and prompt response to detected offenses through corrective action), to develop a reliable and defensible compliance infrastructure. While this existing framework can serve as a guide, additional consideration should be devoted to developing an AI compliance program that is specific and customized to the particular AI solution at hand.

What policies will govern human conduct in the use and monitoring of the AI tool? Who has the authority to launch the AI tool? Who has the authority to recall it? What back-up service would be used if the tool had to be taken offline? Written policies and procedures can help answer these questions.
