CO Enacts “High-Risk” AI Law Regulating Deployers and Developers, Including Health Care Stakeholders

Manatt, Phelps & Phillips, LLP

On Friday, May 17, Colorado Governor Jared Polis, with noted reservations, signed into law SB205, a consumer protection law that imposes significant requirements on developers and deployers of “high-risk” artificial intelligence (AI) systems, requires consumer transparency, and arms the Attorney General with oversight authority. Developers and deployers are defined broadly and include health care stakeholders, such as hospitals, insurers, and digital health companies, that develop or deploy (i.e., use) high-risk AI systems and are not otherwise exempt. The law requires developers and deployers to take certain actions by February 1, 2026.

Given Governor Polis’ reservations, SB205’s requirements may not take effect in their current form. In his letter to the Colorado General Assembly, Governor Polis expressed concerns about the law’s approach to mitigating algorithmic discrimination, the complexity of its compliance and reporting requirements for developers and deployers, and its potential impact on innovation, and he urged the legislature to reconsider some of the law’s provisions in light of concerns raised by industry.

Even if the law is modified before it becomes effective, impacted health care stakeholders should begin to prepare to comply, as we expect that many of the requirements pertaining to risk mitigation and transparency will remain and that other states may use this law as a model for their own legislation.

Who is regulated?

The law sets forth requirements for developers and deployers of “high-risk” AI systems. “High-risk” AI systems1,2 are defined as those that make, or are a substantial factor in making, a “consequential decision,” which is a decision that has a “material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of,” health care services or insurance (among other areas).

What activities are required of developers and deployers?

  • Developers. At a high level, developers must mitigate algorithmic discrimination3 and ensure transparency with deployers, the public, and the Attorney General. Developers must:
    • Protect consumers from algorithmic discrimination: Developers of high-risk AI systems are required to use “reasonable care4 to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system.”
    • Share information with deployers: Developers of high-risk AI systems must make extensive information available to deployers, including, among other things: the types of data used to train the system and the data governance measures used to develop it; the AI system’s purpose and limitations; the intended benefits, uses, and outputs, and any known harms or inappropriate uses of the AI system; how the AI system was evaluated for performance and how it should and should not be used; measures taken to mitigate known or foreseeable risks of algorithmic discrimination; and any other information necessary for deployers to understand the outputs of the high-risk AI system and to monitor its performance for algorithmic discrimination.
    • Share information publicly: Developers of high-risk AI systems must have a statement on their public website that summarizes the types of high-risk systems that the developer has developed or significantly modified, as well as how the developer manages known or reasonably foreseeable risks of algorithmic discrimination. The developer must update the statement as necessary to ensure its accuracy, and must update it no later than 90 days after the developer “intentionally and substantially modifies” any high-risk systems.
    • Share information with the Attorney General: Developers of high-risk AI systems must share with the Attorney General – “in a form and manner prescribed by the Attorney General” (i.e., not stipulated in the law) – any known or reasonably foreseeable risks of algorithmic discrimination. The Attorney General may also request that the developer share any information the law requires the developer to share with a deployer.
  • Deployers. At a high level, deployers must mitigate algorithmic discrimination, create and implement a risk management policy and program, complete impact assessments, and provide consumer transparency. Deployers must:
    • Protect consumers from algorithmic discrimination: Similar to the requirements imposed on developers, deployers of high-risk AI systems are required to use “reasonable care5 to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system.” At least annually, the deployer (or a third party) is required to review the deployment of each high-risk AI system to ensure that the system is not causing algorithmic discrimination. Additionally, if a deployer discovers that a high-risk AI system has caused algorithmic discrimination, the deployer must disclose that discovery to the Attorney General no later than 90 days after the date of discovery.
    • Develop a risk management policy and program: Deployers are required to implement a risk management policy to govern the deployment of the high-risk AI system, which must specify the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy should consider guidance from the Attorney General and from the National Institute of Standards and Technology (NIST) (or another recognized AI risk management framework), as well as the size and complexity of the deployer, the sensitivity and volume of data processed, and the nature and scope of the high-risk AI systems being deployed.
    • Develop impact assessments: Deployers are required to complete an impact assessment at least annually that includes, but is not limited to: a statement disclosing the purpose, intended use cases, and benefits of the high-risk AI system; an analysis of whether the deployment of the high-risk AI system poses any known or reasonably foreseeable risks of algorithmic discrimination; a description of the types of data the high-risk AI system uses and an overview of any deployer-specific data it uses; and a description of post-deployment monitoring and user safeguards.
    • Share information with consumers: Deployers are required – “[n]o later than the time that a deployer deploys a high-risk AI system to make, or be a substantial factor in making, a consequential decision concerning a consumer” – to notify the consumer that a high-risk AI system was deployed; provide the consumer with a statement disclosing the purpose of the high-risk AI system and the nature of the consequential decision, contact information for the deployer, and a description of the high-risk AI system; provide information on the consumer’s right to opt out of the high-risk AI system’s use of their personal data; and communicate the consumer’s right to appeal an adverse consequential decision, among other requirements. More generally, the law requires that deployers disclose to consumers that they are interacting with a high-risk AI system unless it is obvious to a reasonable person.
    • Share information publicly on their website: Deployers are required to make available on their website a statement summarizing the types of high-risk AI systems that are being deployed, how the deployer is managing known or reasonably foreseeable risks of algorithmic discrimination, and the “nature, source, and extent of the information collected and used by the deployer.”
    • Share information with the Attorney General: Although there are no specific requirements for deployers to submit information to the Attorney General at regular intervals, the law stipulates that the Attorney General may require a deployer to submit the deployer’s risk management policies, impact assessments, or other records developed by the deployer as set forth in this law.

Who is exempt from the law?

  • Deployers are exempt from the law’s requirements if the deployer: (1) employs fewer than 50 full-time employees, (2) does not use their own data to train the high-risk AI system, (3) deploys high-risk AI systems based on the system’s intended purpose as outlined by the developer, (4) does not use their own data to evolve the high-risk AI models (i.e., the system learns based on non-deployer data), and (5) makes available to consumers any impact assessments that the developer has completed and provided to the deployer.
  • There are several exemptions for developers and deployers, which may exempt certain health care stakeholders who develop and/or deploy high-risk AI systems. Most relevant to health care, developers and deployers are exempt if:  
    • they develop or deploy a high-risk AI system that has been “approved, authorized, certified, cleared, developed, or granted by a federal agency” (e.g., FDA) and/or is “in compliance with standards established by a federal agency” (e.g., ONC standards); or
    • they are a HIPAA-covered entity and are providing health care recommendations that (i) are generated by an AI system; (ii) require a health care provider to take action to implement the recommendations; and (iii) are not considered to be high-risk. It is unclear how (ii) and (iii) may be read together and whether this is a very broad or a quite narrow exemption.
  • AI systems acquired by or for the federal government are exempt, unless the AI system is a high-risk system that is used to make, or is a substantial factor in making, a consequential decision concerning employment or housing.

Who enforces this law?

Colorado’s Attorney General has enforcement authority over the requirements outlined in SB205, as well as authority to promulgate rules necessary to implement and enforce them. There is no private right of action under this law.

When does this law go into effect?

SB205 is slated to take effect February 1, 2026. However, based on Governor Polis’ letter to the Colorado General Assembly, we expect several modifications to this law before it goes into effect. Governor Polis noted in his letter that “[s]takeholders, including industry leaders, must take the intervening two years before this measure takes effect to fine tune the provisions and ensure that the final product does not hamper development and expansion of new technologies in Colorado.” Additionally, the legislature may make refinements based on findings from a recently created AI impact task force – established through HB1468, which passed earlier this month but has not yet been signed – that is focused on consumer protections and includes a health care technology expert among its members.

Manatt will be monitoring both Colorado’s activity related to this bill and other states that may use it as a blueprint for their own AI legislation.

Key Definitions

The following key terms are defined in SB205:

High-Risk Artificial Intelligence System

“High-risk artificial intelligence system” means any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.

“High-risk artificial intelligence system” does not include:

  • An artificial intelligence system if the artificial intelligence system is intended to:
    • Perform a narrow procedural task; or
    • Detect decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review; or
  • The following technologies, unless the technologies, when deployed, make or are a substantial factor in making, a consequential decision:
    • Anti-fraud technology that does not use facial recognition technology;
    • Anti-malware;
    • Anti-virus;
    • Artificial intelligence-enabled video games;
    • Calculators;
    • Cybersecurity;
    • Databases;
    • Data storage;
    • Firewall;
    • Internet domain registration;
    • Internet website loading;
    • Networking;
    • Spam- and robocall-filtering;
    • Spell-checking;
    • Spreadsheets;
    • Web caching;
    • Web hosting or any similar technology; or
    • Technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions, and that is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful.
Consequential Decision

“Consequential decision” means a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of:

  • Education enrollment or an education opportunity;
  • Employment or an employment opportunity;
  • A financial or lending service;
  • An essential government service;
  • Health-care services;
  • Housing;
  • Insurance; or
  • A legal service.
Deploy

“Deploy” means to use a high-risk artificial intelligence system.

Deployer

“Deployer” means a person doing business in this state that deploys a high-risk artificial intelligence system.

Developer

“Developer” means a person doing business in this state that develops or intentionally and substantially modifies an artificial intelligence system.

Algorithmic Discrimination

“Algorithmic Discrimination” means any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of the state or federal law.

1 “‘Artificial Intelligence System’ means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.” – SB205

2 “‘High-risk artificial intelligence system’ does not include: (I) an artificial intelligence system if the artificial intelligence system is intended to: (A) perform a narrow procedural task; or (B) detect decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review,” among other things.

3 “‘Algorithmic Discrimination’ means any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of the state or federal law.” – SB205

4 There is a rebuttable presumption that a developer used reasonable care if they follow the law’s requirements.

5 There is a rebuttable presumption that a deployer used reasonable care if they follow the law’s requirements.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Manatt, Phelps & Phillips, LLP | Attorney Advertising
