Twenty-Three State AGs Urge NTIA to Prioritize AI Regulation

Troutman Pepper

[co-author: Stephanie Kozol]*

On June 12, a bipartisan group of 23 state attorneys general sent a letter to the chief counsel of the National Telecommunications and Information Administration (NTIA), recommending a risk-based regulatory framework for the use and deployment of AI technology. Drawing on their “extensive experience enforcing data privacy and consumer protection laws,” the AGs noted that states such as Colorado, California, Connecticut, Tennessee, and Virginia already regulate AI through their respective data protection and privacy laws.

The proposed risk-based framework recognizes that some AI uses require greater oversight than others and defines “risk” by weighing multiple factors, including:

  • Categories of impact (e.g., individual physical or psychological safety, civil and human rights, or equal access to goods or services);
  • Types of data used (e.g., medical information, biometric data, or personal information of children); and
  • Whether the automated decision-making impacts an individual’s financial or legal situation.

The framework would leverage resources across public and private sectors to responsibly develop and deploy AI in a trusted and fair manner so as not to discourage innovation.

Further, the AGs proposed foundational principles for developing the risk-based framework. Specifically, they endorsed independent transparency standards covering: (1) testing, (2) assessments, and (3) audits of AI solutions. Modeled on the Energy Star program, NTIA or NIST (or a partnership between the two) would establish independent standards so that trusted auditors could certify an AI system and build public trust. For high-risk AI solutions, the AGs proposed mandatory, periodic external third-party audits. Unsurprisingly, the state AGs also recommended concurrent enforcement authority under any federal regulatory regime to “enable more effective enforcement to redress possible harms.”

Why It Matters

2023 has been the year of AI. With private companies adopting AI solutions to perform many business functions, and with consumers gaining direct access to AI engines like ChatGPT, AI is ripe for regulatory oversight. As this letter demonstrates, state AGs are eager to begin regulating the young industry. Companies that build and deploy AI solutions should keep abreast of regulatory developments to anticipate future requirements. Failure to do so could result in onerous compliance obligations that may necessitate pulling AI systems offline so they can be rebuilt to comply with applicable requirements.

*Senior Government Relations Manager

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Troutman Pepper | Attorney Advertising
