Taking a Byte from the Regulatory Apple: States are Introducing Their Own AI Regulations

ArentFox Schiff
As the federal government grapples with the complexities of comprehensive artificial intelligence (AI) regulation and competing agendas, several US states are taking matters into their own hands by crafting their own solutions to the challenges posed by the rapid advancement of AI.

AI has become an integral part of everyday life and everyday business. Its presence across services, products, and industries will only expand as the public and private sectors continue to weigh the benefits (and risks) of deploying AI.

While there have been some efforts to prompt federal oversight of AI, such as the October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the federal government has not yet enacted applicable federal law or established an overarching regulatory scheme. As a result, AI regulation seems on track to follow the same path as privacy and data collection regulation, with the states, the courts, and the industry itself trying to fill the void. Certain states in particular are leading the regulatory charge in the absence of federal action.

To date, at least 12 states — including California, Colorado, Connecticut, Illinois, New York, Utah, and Washington — have enacted or proposed laws to regulate AI to some degree. This is likely to create complex and potentially inconsistent compliance obligations for businesses building or using AI, or considering doing so, especially those that operate or have customers in multiple states. For consumers, these state initiatives could increase individual protection and transparency in how AI is used in consumer-facing products and services.

For example, the Connecticut AI bill, SB 2, is proposed legislation that regulates private sector deployment and use of artificial intelligence systems. The bill would establish a framework governing the development, deployment, and use of certain AI systems in "high-risk" situations, where AI makes a consequential decision that has a significant impact on a person. Such high-risk situations include criminal justice and access to education, areas where states historically have taken the legislative lead. This may mean that these applications of AI will continue to be regulated at the state level even in the event of future federal regulation. Other points of friction are likely to arise among potentially conflicting AI regulations from different states and the federal government.

As the federal government continues to work towards comprehensive AI regulation, the innovative solutions being developed at the state level provide valuable insights and potential models for future legislation.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© ArentFox Schiff | Attorney Advertising

