California’s New Frontier AI Law: What General Counsel Need to Know

Harris Beach Murtha PLLC

Overview of SB 53 – California’s AI Safety Law

California recently enacted Senate Bill 53 (SB 53), the Transparency in Frontier Artificial Intelligence Act, the first comprehensive state law addressing the safety of powerful AI systems. Governor Gavin Newsom signed SB 53 on September 29, 2025, after a year of intense debate. The law takes effect Jan. 1, 2026, and imposes significant new requirements on developers of the most advanced AI models (dubbed “frontier models”). It aims to strike a balance between fostering AI innovation and installing “commonsense guardrails” to protect public safety.

Who is covered? SB 53 primarily targets “large frontier developers” – companies with more than $500 million in annual revenue that develop frontier AI models using extremely high computational resources. In practice, this means AI labs like OpenAI, Meta, Anthropic and similar organizations developing foundation models at the cutting edge. A “frontier model” is defined by a technical threshold (training runs exceeding 10^26 operations) to capture only the most advanced, large-scale AI systems. Smaller AI projects or ordinary business AI applications are not directly regulated by this law. However, SB 53’s standards are expected to influence industry norms well beyond these few giant AI labs.
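For a quick sense of how the two headline thresholds interact, the sketch below encodes them as a simple applicability check. It is purely illustrative – the function name, return labels and the assumption that revenue and training compute are the only relevant tests are ours, not the statute's – and any real applicability analysis would turn on SB 53's full definitions and future guidance.

# Illustrative sketch only: SB 53's two headline numeric triggers as described above.
# Not a substitute for the statutory definitions or legal advice.

FRONTIER_COMPUTE_THRESHOLD = 10**26        # training operations (the "frontier model" trigger)
LARGE_DEVELOPER_REVENUE_USD = 500_000_000  # annual revenue (the "large frontier developer" trigger)

def sb53_applicability(annual_revenue_usd: float, training_operations: float) -> str:
    """Classify a developer under SB 53's two headline thresholds (simplified)."""
    is_frontier_model = training_operations > FRONTIER_COMPUTE_THRESHOLD
    is_large_developer = annual_revenue_usd > LARGE_DEVELOPER_REVENUE_USD

    if is_frontier_model and is_large_developer:
        return "large frontier developer - the primary target of SB 53's obligations"
    if is_frontier_model:
        return "frontier developer below the revenue threshold"
    return "outside SB 53's direct scope"

# Example: a hypothetical lab with $2 billion in revenue training a model at roughly 3 x 10^26 operations
print(sb53_applicability(2_000_000_000, 3e26))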

Key Requirements: Transparency, Risk Frameworks, Incident Reporting

For those organizations that do fall under SB 53, the law mandates a robust AI safety governance regime. The major obligations include:

Publish a Frontier AI Safety Framework: Each large frontier developer must create and publicly disclose a “frontier AI framework” on its website. This is essentially a comprehensive AI safety plan. The framework should describe how the company is incorporating national and international standards (e.g. NIST’s AI Risk Management Framework) and industry best practices into its development process. It must cover risk assessment procedures – for example, how the company will identify potentially catastrophic risks (such as threats to life or critical infrastructure) that could arise from its AI model’s capabilities. It also must detail mitigation measures and testing protocols (e.g. red-teaming and safety evaluations) used to address those risks.

Importantly, developers are encouraged to align these disclosures with widely recognized standards for AI safety and security. The framework must be kept up to date (reviewed at least annually and revised as needed) and any material changes must be published with an explanation.

Pre-Release Transparency Reports: Before deploying any new frontier model (or a major update to an existing model), companies must publish a transparency report. This report functions like a detailed model card for the AI system. It must include basic facts (the model’s release date, supported languages and modalities, intended uses and use restrictions).

Crucially, it must also summarize all the risk assessments the company conducted on that model and the results of those assessments, including whether any independent third-party evaluators were involved in testing the model’s safety. In other words, before launching a powerful AI model in public, the developer has to “show its work” on safety and disclose what catastrophic risks were considered (for example, the AI’s potential to facilitate bioweapons or autonomous cyberattacks) and how those risks have been addressed.

Critical Incident Reporting Mechanism: SB 53 creates a formal channel to report “critical safety incidents” involving AI. California’s Office of Emergency Services (Cal OES) is tasked with establishing a mechanism for both the public and AI developers to report serious AI-related incidents. Under the law, a “critical safety incident” generally means any event in which the AI’s behavior leads to death, serious physical injury, significant property damage or other major harm – for instance, an AI system operating autonomously in a way that would amount to a serious crime if done by a human. If a frontier AI developer discovers such an incident involving one of its models, it must notify Cal OES within 15 days. In cases of truly imminent danger (for example, an AI system manifesting dangerous capabilities in real time, à la Skynet), the developer must alert the relevant authorities (such as law enforcement) within 24 hours.

The Office of Emergency Services will review these incident reports, and it can share anonymized, aggregated incident data in annual reports to the governor and legislature (to inform policymakers about emerging AI risks). All such reports in California are exempt from public disclosure laws to encourage candid sharing of problems without fear of revealing trade secrets or sensitive security info.

Notably, SB 53’s incident reporting goes beyond what even the EU’s AI Act presently requires – for example, companies must report AI-enabled crimes carried out without human oversight or deceptive behavior by their AI, categories that the EU law does not explicitly cover.

Whistleblower Protections: To bolster internal accountability, SB 53 provides strong whistleblower protections for employees of frontier AI developers. Companies may not retaliate against, or impose gag rules on, any employee (a “covered employee”) who in good faith discloses to authorities or internally that the company’s AI activities pose a “specific and substantial danger” to public health or safety, or that the company is violating the AI safety law. SB 53 also requires large AI companies to set up an anonymous internal reporting process for staff to raise such concerns to management, with a mandated follow-up procedure. Employees can also report externally, e.g. directly to the state Attorney General or via a whistleblower hotline. These provisions recognize that the engineers and safety teams inside AI labs are often the first to know of looming risks, so the law encourages an “if you see something, say something” culture regarding AI hazards.

Ongoing Compliance & Enforcement: SB 53 is enforced by the California Attorney General, and violations carry potentially steep penalties. Each failure to meet a requirement (for example, not publishing a safety framework, or making materially misleading statements about AI risks) can result in civil penalties of up to $1 million per violation. The Attorney General can also seek injunctions – meaning a court could order a company to halt deployment of a non-compliant AI model until it satisfies the safety requirements.

Why SB 53 Succeeded When SB 1047 Failed

It’s worth comparing SB 53 with its predecessor, SB 1047, a broader AI bill vetoed by Governor Newsom in 2024. SB 1047, the Safe and Secure Innovation for Frontier AI Models Act, attempted to mandate even more sweeping controls on AI labs. It would have applied to any AI model costing more than $100 million to train and would have created a dedicated state “Board of Frontier Models” to oversee compliance. It also would have regulated those providing the cloud computing power for AI training – essentially pulling in cloud providers as potentially liable parties.

These features went beyond SB 53’s approach, and they provoked fierce pushback. Tech companies and venture capital groups lobbied hard against SB 1047, arguing that such strict rules would stifle innovation and should be handled at the federal level.

How SB 53 Compares to the EU AI Act

It is instructive to compare California’s law with the EU Artificial Intelligence Act, which was finalized in 2024. The EU AI Act is a sweeping regulatory framework that applies to a broad range of AI systems across all EU countries. Unlike SB 53’s narrow focus on frontier models and the labs that build them, the EU AI Act applies a risk-based classification to all AI applications and reaches deployers of AI systems, not just their developers.

Potential National Impact and Other State Initiatives

Just as California’s consumer privacy law (CCPA) became a de facto national standard in the absence of federal privacy legislation, SB 53 may spur a similar dynamic in AI governance. Companies that operate nationally or globally often prefer to implement one consistent compliance program. It will likely be easier for a large AI developer to follow SB 53’s requirements in every market than to maintain a California-only version, especially since the disclosures and safety practices SB 53 requires carry a public benefit. In fact, SB 53 explicitly allows compliance via equivalent federal standards if those emerge: if a federal law or regulation imposes similar incident reporting or risk management rules, California will accept compliance with that standard as compliance with SB 53. These provisions signal a desire to avoid duplication once a national framework is in place.

Other states are already following California’s lead. Notably, New York lawmakers passed the “Responsible AI Safety and Education Act” (RAISE Act) in June 2025, which closely mirrors SB 53 in targeting frontier AI models. The RAISE Act (which is currently awaiting Governor Hochul’s decision) would require major AI labs to publish safety protocols, conduct third-party audits and report safety incidents to the NY Attorney General within 72 hours of occurrence.

In summary, SB 53 represents a new baseline for what California (and possibly other states) will expect of those developing advanced AI systems. By voluntarily adopting the strictest common standard (such as California’s) even outside the jurisdictions where it formally applies, companies can reduce legal uncertainty, simplify compliance and build public trust.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Harris Beach Murtha PLLC

