Colorado’s Historic SB 24-205 Concerning Consumer Protections in Interactions with AI Signed Into Law, After Passing State Senate and House

Epstein Becker & Green

On May 17, 2024, Colorado Governor Jared Polis signed into law SB 24-205, concerning consumer protections in interactions with artificial intelligence systems, after the Senate passed the bill on May 3 and the House of Representatives followed on May 8. In a letter to the Colorado General Assembly, Governor Polis noted that he signed the bill into law with reservations, hoping to further the conversation on artificial intelligence (AI) and urging lawmakers to “significantly improve” on the law before it takes effect.

SB 24-205 will become effective on February 1, 2026, making Colorado the first state in the nation to enact broad restrictions on private companies using AI. The measure aims to prevent algorithmic discrimination affecting “consequential decisions”—including employment-related decisions.

The Colorado legislation adds a new Part 17, “Artificial Intelligence,” to Article 1 of Title 6 of the Colorado Revised Statutes, the Colorado Consumer Protection Act. Section 6-1-1702 requires “developers,” and Section 6-1-1703 requires “deployers,” of high-risk artificial intelligence systems to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the use of a high-risk AI system.

Colorado’s attorney general has exclusive authority to enforce the measure.

The bill distinguishes between “developers” and “deployers.” A “developer” is defined as a person doing business in the state who develops or substantially modifies an AI system. A “deployer,” meanwhile, means a person doing business in the state who deploys a high-risk AI system. The bill focuses on “high-risk” AI systems involved in making consequential decisions, imposing a duty on developers and deployers to avoid algorithmic discrimination in the use of such systems. While employers may not be developers, they will almost certainly be deployers of high-risk AI systems, particularly in hiring. We will focus on those provisions of the bill.

Risk Management Policy/Program. On or after February 1, 2026, a deployer of a high-risk AI system must implement a risk management policy and program to govern its deployment of any high-risk AI system. Among other things, the policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The policy and program must be systematically reviewed and updated, and must be reasonable in light of factors such as national and international guidance and standards (or any risk management framework that the attorney general may designate); the size and complexity of the deployer; the nature, scope, and intended uses of the high-risk AI systems deployed; and the sensitivity and volume of data processed in connection with those systems.

Impact Assessment. A deployer, or a third party contracted by the deployer, that deploys a high-risk AI system on or after February 1, 2026, must complete an impact assessment for the system at least annually and within 90 days after any intentional and substantial modification to the system is made available. The impact assessment must include, among other things: a statement disclosing the purpose, intended use cases, deployment context of, and benefits afforded by, the high-risk AI system; an analysis of whether the deployment poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the discrimination and the steps taken to mitigate the risks; a description of the categories of data the system processes as inputs and the outputs it produces; an overview of the categories of data the deployer used to customize the system; the metrics used and transparency measures taken concerning the system; and a description of the post-deployment monitoring and user safeguards provided.

Modifications. An impact assessment following an intentional and substantial modification to a high-risk AI system must include a statement disclosing the extent to which the high-risk AI system was used in a manner that was consistent with, or varied from, the developer’s intended uses.

Number of Assessments. A single impact assessment may address a comparable set of high-risk AI systems, and a reasonably similar impact assessment completed to comply with another law or regulation may suffice.

Review and Recordkeeping. The most recently completed impact assessment, and all prior impact assessments, must be maintained for at least three (3) years following the final deployment of the high-risk AI system. The deployer, or a third party contracted by the deployer, must review the deployment of each high-risk AI system at least annually to ensure that it is not causing algorithmic discrimination.

Notification to Consumers. A deployer that deploys a high-risk AI system to make, or be a substantial factor in making, a consequential decision concerning a consumer (defined simply as an individual who is a Colorado resident) must notify the consumer that a high-risk AI system was used to make the decision; provide a statement disclosing the purpose of the system and the nature of the consequential decision; and provide a statement disclosing the principal reason or reasons for the decision, including the degree to which the AI system contributed and the type and sources of data involved. A deployer must also provide the consumer with an opportunity to correct any incorrect personal information that the AI system processed, an opportunity to appeal an adverse consequential decision, and more. Notice generally must be provided directly to the consumer, in plain language, in all languages in which the deployer typically communicates in the ordinary course of business, and in a format that is accessible to consumers with disabilities.

Websites. Like developers, deployers will be subject to specific requirements concerning statements on their websites, including disclosures of the types of AI systems currently deployed and how risks of algorithmic discrimination are managed.

Exceptions. The provisions concerning the risk management policy and program, the impact assessment, and the website notices do not apply to, among others, deployers that employ fewer than 50 full-time employees and do not use their own data to train the high-risk AI system.

Discovery. Like a developer, a deployer that deploys a high-risk AI system on or after February 1, 2026, and subsequently discovers that the system has caused algorithmic discrimination must, no later than 90 days after the date of the discovery, send the attorney general a notice disclosing the discovery.

Additional Disclosures to Consumers. A developer or deployer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an AI system intended to interact with consumers must disclose to each consumer that the consumer is interacting with an AI system. An explicit disclosure is not required, however, where it would be obvious to a reasonable person that they are interacting with an AI system.

Developers and deployers alike can establish a rebuttable presumption that they used reasonable care by disclosing to the attorney general any discovery of algorithmic discrimination within 90 days of the discovery and by complying with specific provisions of the bill. Developers and deployers also have an affirmative defense, with respect to both high-risk and generative systems, if they have implemented and maintained a program that complies with a nationally or internationally recognized risk management framework for AI systems that the bill or the attorney general designates, and they take specified measures to discover and correct violations of the bill.

Takeaways


Employers, in particular, should be aware that the Colorado measure is part of a nationwide push to prevent algorithmic discrimination in the use of AI, that is, use of an AI system that results in unlawful differential treatment or impact disfavoring an individual or group of individuals on the basis of “age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification” protected under state or federal law.

While SB 24-205 makes Colorado the first state in the nation to enact broad restrictions on private companies’ use of AI, as we have previously reported, New York City began enforcing Local Law 144 in July 2023. That law regulates employers’ use of “automated employment decision tools” to make hiring and promotion decisions in New York City, and it requires covered employers to provide notice to candidates of the use of such tools and to conduct annual bias audits of them. Also check out our latest blogs on AI resume screening tools and federal anti-discrimination laws; the extension of the antidiscrimination provisions of the Affordable Care Act to patient care decision support tools, including algorithms; San Francisco’s generative AI guidelines for city workers; insurance underwriting and pricing in New York State; and “Achieving Legal Compliance in AI: Minimizing Bias in Algorithms.”

The Colorado measure resembles Connecticut’s SB 2, which also would have required developers and deployers of high-risk AI systems to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. The Connecticut measure passed the Senate but was still in the House when that legislative session ended on May 8, 2024, the same day the Colorado House passed SB 24-205.

Regardless of whether the Colorado General Assembly heeds Governor Polis’s request to revise the bill before it takes effect in 2026, developers, deployers, and employers alike should be aware of the increasing regulatory focus on the use of AI in the workplace and elsewhere. Employers in particular should begin implementing an AI governance framework and establishing plans for the implementation and monitoring of any AI tools.

Further information on Colorado SB 24-205 can be found on our sister publication, Health Law Advisor.

Epstein Becker Green Staff Attorney Ann W. Parks contributed to the preparation of this post.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Epstein Becker & Green | Attorney Advertising
