[Podcast] Five Things Developers Should Remember About New State AI Laws in Health Care

Ropes & Gray LLP

On this Ropes & Gray podcast, health care partner Jamie Darch and associate Andrea Millard introduce the Health AI Atlas, a new online resource designed to help health care stakeholders navigate the complex and rapidly evolving landscape of state laws regulating artificial intelligence. The discussion covers the top five things developers in the health care sector need to know about compliance: determining which laws apply, implementing robust compliance programs, monitoring risks on an ongoing basis, reporting to regulators, and understanding the potential penalties for non-compliance. Jamie and Andrea also highlight the importance of considering broader privacy statutes that affect automated decision-making and profiling, even when those statutes are not explicitly labeled as AI laws. With enforcement expected to increase, developers are urged to stay informed and proactive in meeting these new obligations.


Transcript:

Andrea Millard: Hello, and welcome to today’s podcast series, focusing on our new website resource, the Health AI Atlas. My name is Andrea Millard, and I am an associate in Ropes & Gray’s health care practice group. With me today is Jamie Darch, a partner in the health care practice group. Today, we’re focusing on the fast-evolving landscape of state laws regulating artificial intelligence, and specifically the top five things developers in the health care sector need to know. Before we get started, Jamie, do you want to explain what the Health AI Atlas is?

Jamie Darch: Yes. The Health AI Atlas is a resource we created on the Ropes & Gray website to help our clients across the health care industry—from investors, to health care providers, to payors, to health IT developers—understand how AI laws apply to their current companies or prospective investments. While at the federal level the Trump administration has continuously pushed to deregulate AI, we are still seeing a wide proliferation of state laws targeting the use of AI by health care stakeholders. This creates an evolving patchwork of regulatory requirements that can be difficult to navigate, so we created the Health AI Atlas to help guide clients through the regulatory thicket.

Andrea Millard: And, we should mention, the Trump administration is actively trying to tear down this thicket and has released an Executive Order directing federal agencies to challenge state laws on preemption grounds, which we’ll cover in more detail in a later podcast. But for now, the state laws stand, and many require compliance now. So, in addition to the Health AI Atlas, we are launching this podcast series to provide digestible summaries of what we are seeing, boiling it down to five key things to know. As I mentioned, this week we are starting with developers. And before we dig into the AI laws as relevant to developers in the health care space, could you please explain who we mean by “developers”?

Jamie Darch: Sure. When we say “developers,” we mean anyone in the health care IT sector who creates, builds, or customizes AI tools, whether you’re a technology vendor, a hospital system building your own chatbot, or a digital health startup.

Andrea Millard: That makes sense. So, let’s discuss the five key takeaways for developers in the health care space. The first thing is scope: how do you know if the AI you are developing is subject to one of these laws?

Jamie Darch: Yes, that is the threshold challenge for developers, as different states are taking different approaches. There are some states, like California and Texas, that are imposing requirements on any developers of generative AI; however, other states, like Colorado, are only imposing requirements on developers of AI that is designated as “high-risk” because it’s used to make consequential decisions impacting people, like health care and insurance decisions.

Notably, while many states have enacted AI laws, only a handful directly regulate developers. However, a number of states impose requirements on deployers of AI that will likely be contractually passed on to developers, because deployers are not really in a position to meet these obligations. For example, we’re seeing a lot of laws requiring deployers to provide documentation to end users of AI systems about their algorithms. Deployers—like providers and payors—are not necessarily going to be in a position to meet these requirements without support from the developers who built the systems.

Our advice for developers would be to carefully review which requirements are likely to apply not only directly, but also indirectly based on the type and location of end user that they’re targeting.

Andrea Millard: That makes sense—companies need to start out with a clear picture of what requirements directly and indirectly apply to them. So, what’s the second thing developers should remember?

Jamie Darch: So, second, developers should be prepared to implement a compliance program by the time a product launches. Any developer planning to bring an AI system to market in 2026 needs to have a compliance program in place before launch.

The laws vary in how explicit they are about what constitutes a comprehensive and adequate AI compliance program. For developers that are going to be operating across states, the minimum elements should be:

  • Taking a comprehensive inventory of your AI offerings and how they are intended to be used by various deployers.
  • Implementing policies, contracts, and data governance practices to address discrimination, biometric data, and transparency requirements.
  • And, adopting a risk management framework for AI development, like the NIST AI Risk Management Framework.

These laws also require developers to document, for each AI model, training data, outputs, performance metrics, and known limitations. California, in particular, requires developers to post documentation on their website about training data, including sources and ownership of the data, before launch.[1] Similarly, in New York, before deploying certain AI models, developers must publish their written safety and security protocols publicly.[2]

Other states, like Colorado, require developers to provide documentation to deployers covering data sources, system limitations, evaluation methods, and risk mitigation strategies before the system is deployed.[3]

Andrea Millard: Okay, so assuming a developer wades through all these laws, identifies which ones apply to its products, and implements a compliance program, are they done?

Jamie Darch: Unfortunately, no—the third thing to remember is that developers must actively monitor their AI systems for risks, not just at launch but throughout the lifecycle, in order to comply with ongoing transparency and notification obligations. Colorado and Texas require developers to continue monitoring AI systems for accuracy, safety, and misuse post-deployment. Under these laws, companies must keep documentation about system capabilities, limitations, risks, and mitigation steps up to date. In addition, some states specifically require developers to annually review their AI compliance programs to account for changes in model capabilities and industry best practices. For example, New York requires developers to annually retain a third party to perform an independent audit of compliance with the law’s requirements.

These laws are focused on proactive, ongoing risk management, not just a one-time compliance check.

Andrea Millard: Good to know. What is the fourth thing to remember? Are there also obligations to make reports or submissions to government agencies?

Jamie Darch: Yes, disclosures and submissions to regulatory agencies are front and center. Most of the states we have been talking about—including New York, California, and Colorado—require disclosures to regulators in the event a developer identifies a compliance issue, such as an AI system causing discrimination or safety incidents. Each state has its own set of definitions and triggers for regulatory reporting, so developers need to implement incident response procedures that consider state-by-state regulatory requirements, much as we do when responding to data breaches given the patchwork of state data privacy laws.

In addition to requiring companies to make submissions, states also include mechanisms for state regulators to launch investigations into, and demand information from, companies regarding compliance with these laws—whether as a result of an incident or simply as a prophylactic compliance measure.

Andrea Millard: Okay, so there are going to be touch points with regulators. I imagine that with all these touch points, there will be some potential for enforcement. What do penalties in this space look like?

Jamie Darch: Yes, that’s the fifth thing to remember: non-compliance could cost you. In Texas, uncurable violations can result in fines of up to $200,000 per incident, with additional daily penalties for ongoing noncompliance. Utah allows per-violation fines of between $2,500 and $5,000. And New York will not be outdone: its law gives the Attorney General the ability to impose up to $10 million in penalties. While many of these laws are new and there has not been much enforcement to date, developers need to keep these requirements in mind now, as we expect enforcement to ramp up in the coming months.

Andrea Millard: So, we have primarily discussed laws that expressly target the development and use of artificial intelligence, but are there laws that are not AI-specific that developers should also be aware of?

Jamie Darch: Great question. States like Maryland, Montana, Oregon, Tennessee, and Virginia all have privacy statutes that govern automated decisions, profiling, and sensitive data. While the definitions vary by state, “automated decisions” generally refer to outcomes produced by an AI system without direct human intervention that affect individuals’ rights or access to services, and “profiling” means any automated processing of personal data to evaluate, analyze, or predict aspects about an individual. These laws often require explicit consent and transparency for automated processing, and developers need to be aware of these obligations because they can apply to AI-driven features, like chatbots, even if “AI” isn’t mentioned directly in the law. Texas’s general data privacy law, for example, applies to automated processing used in health decisions, and several states define “profiling” in ways that capture AI-powered personalization.

Andrea Millard: Good to know. Jamie, any final advice for developers navigating this patchwork of state laws?

Jamie Darch: Stay informed and proactive. The regulatory landscape is evolving quickly, and requirements can differ significantly by state. Regularly review your compliance posture, update your documentation, and make sure your team is trained on these new obligations. If you’re developing AI tools for health care, especially those that interact with patients or handle sensitive data, you need to treat compliance as a core part of your development process, not an afterthought.

Andrea Millard: Thank you, Jamie, for breaking down these complex requirements. If you’d like more information on this topic or from our health care group, please don’t hesitate to contact us or visit our website. Our website’s interactive map provides details about the various enacted state health care AI laws nationwide. You can look out for our upcoming podcasts on key AI considerations for payors and providers by subscribing or listening wherever you get your podcasts, including Apple and Spotify. Thanks again for listening.

  [1] Cal. Civ. Code § 3111
  [2] Responsible AI Safety and Education (“RAISE”) Act
  [3] Colo. Rev. Stat. § 6-1-1701 et seq.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Ropes & Gray LLP
