Colorado Enacts Artificial Intelligence Legislation Affecting AI Systems Developers, Deployers

Jackson Lewis P.C.

Enacting what is perhaps the first comprehensive regulation of artificial intelligence (AI) at the state level in the United States, Colorado’s governor signed the Artificial Intelligence Act, Senate Bill (SB) 24-205, on May 17, 2024.

Colorado is not alone in advancing AI regulation. The new law joins the Equal Employment Opportunity Commission’s technical assistance documents, the Department of Labor’s recent pronouncement, the New York City automated employment decision tools law, Tennessee’s regulation of deepfakes, Illinois’ Artificial Intelligence Video Interview Act, Maryland’s facial recognition law, and the European Union’s AI Act, to name a few.

SB 24-205 targets “developers” and “deployers,” categories that will very likely reach organizations in their capacities as employers. It requires both to use reasonable care to avoid algorithmic discrimination in high-risk artificial intelligence systems. In each case, satisfying specified provisions will establish a rebuttable presumption that reasonable care was used, creating a compliance roadmap for deployers and developers to consider.

The statute takes effect on February 1, 2026.

Definition of High-Risk AI Systems

Under the statute, high-risk AI systems are defined as those that, when deployed, make or are a substantial factor in making "consequential" decisions. Consequential decisions are those affecting:

  • Employment or employment opportunity
  • Education enrollment or opportunity
  • Financial or lending services
  • Essential government services
  • Healthcare services
  • Housing
  • Insurance
  • Legal services

High-risk systems do not include AI that either (i) performs narrow procedural tasks or (ii) detects decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence human assessment or review.

The statute also excludes certain technologies, such as cybersecurity tools and spam filtering, from the definition of high-risk AI systems, provided they do not make, and are not a substantial factor in making, consequential decisions.

Developer Obligations

A developer, as defined in the statute, is a person doing business in Colorado who develops, or intentionally and substantially modifies, an AI system.

Under the statute, developers of high-risk AI systems are required to use reasonable care to avoid algorithmic discrimination. The statute includes a rebuttable presumption that a developer used reasonable care if the developer complied with specific requirements, including:

  • Making available to deployers of the AI system a statement disclosing specified information about the system
  • Providing deployers the information and documentation necessary to complete an impact assessment
  • Making a publicly available statement summarizing the types of high-risk systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer and how the developer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development
  • Disclosing to the state attorney general and known deployers any known or reasonably foreseeable risk of algorithmic discrimination within 90 days after discovery

Deployer Obligations

A deployer is a person doing business in Colorado who uses a high-risk AI system. Deployers, including employers using AI for covered purposes, also have an obligation to use reasonable care to avoid algorithmic discrimination.

As with developers, there is a rebuttable presumption that deployers used reasonable care if they comply with the following:

  • Implement a risk management policy and program for high-risk AI systems
  • Complete an impact assessment of high-risk AI systems
  • Notify consumers of specified items if the high-risk systems make decisions about a consumer
  • Make a publicly available statement summarizing the types of high-risk systems that the deployer currently deploys
  • Disclose to the attorney general the discovery of algorithmic discrimination within 90 days of discovery

Certain deployers may be exempt from requirements such as notice to consumers if, at the time of deployment and at all times while using high-risk AI systems, they:

  • Employ fewer than 50 full-time equivalent employees;
  • Do not use their own data to train the AI system;
  • Use the AI system only for the intended uses disclosed by the developer; and
  • Make certain information related to the impact assessment available to consumers.

Enforcement

SB 24-205 makes clear there is no private right of action under the law, leaving exclusive enforcement to the state attorney general’s office. The attorney general’s office also has discretion under the statute to implement further rulemaking.

Takeaways

There is no time like the present to consider risk management plans and disclosures for the use of AI, particularly in areas like employment, as the federal government sharpens its focus on AI and other states consider similar requirements.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Jackson Lewis P.C. | Attorney Advertising
