California set a new baseline for transparency in “frontier” AI this week. On September 29, 2025, Governor Newsom signed SB 53, the Transparency in Frontier Artificial Intelligence Act (“TFAIA” or the “Law”). According to the governor’s press release on the bill signing, TFAIA is “the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves.” TFAIA, which was authored by Senator Scott Wiener, is intended to ensure the safety of “foundation models” developed by “frontier developers,” requiring transparency, reports of potential “critical safety incidents,” and protections for whistleblowers. TFAIA provides for civil penalties for noncompliance, to be enforced by the Attorney General. The Law takes effect January 1, 2026.
Below are highlights of the TFAIA:
Transparency Requirements
Frontier AI Framework
TFAIA requires a large frontier developer to write, implement, comply with, and clearly and conspicuously publish on its internet website a frontier AI framework that applies to the large frontier developer’s frontier models. The frontier AI framework must be reviewed and, as appropriate, updated at least once per year and must describe how the large frontier developer approaches all of the following:
- Incorporating national standards, international standards, and industry-consensus best practices into its frontier AI framework.
- Defining and assessing thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds.
- Applying mitigations to address the potential for catastrophic risks based on the results of assessments undertaken pursuant to the Law.
- Reviewing assessments and adequacy of mitigations as part of the decision to deploy (i.e., make available to a third party for use, modification, copying, or combination with other software except where the primary purpose is to develop or evaluate the frontier model) a frontier model or use it extensively internally.
- Using third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks.
- Revisiting and updating the frontier AI framework, including any criteria that trigger updates and how the large frontier developer determines when its frontier models are substantially modified enough to require disclosures pursuant to the Law.
- Cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer by internal or external parties.
- Identifying and responding to critical safety incidents.
- Instituting internal governance practices to ensure implementation of these processes.
- Assessing and managing catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms.
Transparency Reports
Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, the Law requires:
- A frontier developer to clearly and conspicuously publish on its internet website a transparency report including: the internet website of the frontier developer; a mechanism for individuals to communicate with the frontier developer; the frontier model’s release date; and the intended uses of the frontier model. A system card or model card that includes the necessary items satisfies the requirement. While redactions are permitted to protect trade secrets, cybersecurity, public security, or national security, the frontier developer must describe the character of and justification for the redaction in the report and retain unredacted information for five years.
- A large frontier developer to include in the required transparency report summaries of specified information, including: assessments of catastrophic risks from the frontier model conducted pursuant to the large frontier developer’s frontier AI framework; the results of those assessments; and the extent to which third-party evaluators were involved.
Assessment of Catastrophic Risk
A large frontier developer must transmit to the Office of Emergency Services (California’s public safety and emergency management agency, “Cal OES”) a summary of any assessment of catastrophic risk resulting from internal use of its frontier models every three months, or pursuant to another reasonable schedule specified by the large frontier developer, and must provide written updates to Cal OES, as appropriate.
Safety/Reporting Potential Critical Safety Incidents
TFAIA requires Cal OES to establish two reporting channels:
- Critical safety incident reporting. Cal OES must establish a mechanism for frontier developers and members of the public to report critical safety incidents. The reports must capture: (i) the date of the critical safety incident; (ii) the reasons the incident qualifies as a critical safety incident; (iii) a short and plain statement describing the incident; and (iv) whether the incident was associated with internal use of a frontier model.
- Confidential risk assessments. The Law also provides that Cal OES maintain a mechanism for large frontier developers to confidentially submit summaries of any assessments of the potential for catastrophic risk resulting from internal use of their frontier models.
A frontier developer must report any critical safety incident pertaining to its frontier models to Cal OES within 15 days of discovering the critical safety incident. However, if a frontier developer discovers that a critical safety incident poses an imminent risk of death or serious physical injury, the frontier developer must disclose that incident within 24 hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law.
Cal OES may designate other federal laws, regulations, or guidance documents that impose or state standards or requirements for critical safety incident reporting that are “substantially equivalent” to or stricter than those required by the critical safety incident reporting section of the Law, or that are intended to assess, detect, or mitigate the catastrophic risk. A frontier developer that intends to comply with the critical safety incident reporting requirements of the Law by complying with the requirements of such designated documents must declare its intent to do so to Cal OES. After doing so, the frontier developer will be deemed in compliance with the critical safety incident reporting requirements of the Law if it complies with the designated requirements; failure to comply with those requirements will constitute a violation of the critical safety incident reporting section of the Law.
Whistleblower Protections
TFAIA provides protections for whistleblowers working with foundation models. The Law prohibits a frontier developer from making, adopting, enforcing, or entering into a rule, regulation, policy, or contract that prevents a covered employee from disclosing, or that retaliates against a covered employee for disclosing, information to the Attorney General, a federal authority, a person with authority over the covered employee, or another covered employee who has authority to investigate, discover, or correct the reported issue. These protections apply if the covered employee has reasonable cause to believe that the information discloses that the frontier developer’s activities pose a specific and substantial danger to the public health or safety resulting from a catastrophic risk, as defined, or that the frontier developer has violated the TFAIA.
A large frontier developer must provide a reasonable anonymous internal process for covered employees to file a report if the covered employee believes in good faith that the information indicates that the large frontier developer’s activities present a specific and substantial danger to the public health or safety resulting from a catastrophic risk or that the large frontier developer violated the TFAIA. The whistleblower is entitled to a monthly update on the investigation status and actions taken in response.
Frontier developers also must provide employees who are responsible for assessing, managing, or addressing risks of critical safety incidents with a clear notice of their rights and responsibilities under TFAIA. The notice must be posted in the workplace and provided to and acknowledged by such employees annually.
Penalties and Remedies
A large frontier developer that fails to publish or transmit a compliant document required under the Law, makes a statement in violation of the Law, fails to report an incident as required by the Law, or fails to comply with its own frontier AI framework will be subject to a civil penalty in an amount dependent upon the severity of the violation that does not exceed one million dollars ($1,000,000) per violation. The Law expressly provides that the civil penalty can only be recovered in a civil action brought by the Attorney General.
Publicity
TFAIA has been a closely watched bill, especially after Senator Wiener introduced another version of the legislation last year that the governor ultimately vetoed. TFAIA is based on an AI safety study commissioned by Governor Newsom earlier this year. In announcing the TFAIA, the governor leaned into the message that regulations need to strike a balance between enabling innovation and building public trust. He emphasized the need to keep America at the forefront of technology. According to the press release, California is the “birthplace of AI” and home to 32 of the 50 top AI companies worldwide, and California “leads U.S. demand for AI talent.”
There has been strong industry opposition to the TFAIA. Prior to the adoption of the Law, industry associations such as the California Chamber of Commerce, the Computer & Communications Industry Association (CCIA), and TechNet, in a joint coalition letter to Senator Wiener, expressed their opposition to the bill unless it was amended. The Consumer Technology Association (CTA) urged Governor Newsom to veto SB 53; the Chamber of Progress submitted an opposition letter to the Assembly Privacy Committee; and the Software & Information Industry Association (SIIA) also issued a statement opposing SB 53.
CalCompute
The Law establishes within California’s existing Government Operations Agency a consortium to develop a framework for the creation of a public cloud computing cluster called “CalCompute.” The framework will advance the development and deployment of artificial intelligence that is safe, ethical, equitable, and sustainable by, at a minimum, fostering research and innovation that benefits the public and enabling equitable innovation by expanding access to computational resources. The consortium is intended to be established within the University of California.
Key Takeaways
If your company does business in California and develops foundation models, the following are important considerations:
- Determine whether you are a frontier developer and a large frontier developer under the Law.
- Prepare for the transparency requirements under the Law, including the creation of a frontier AI framework to be reviewed at least annually.
- Review AI governance and documentation practices to determine whether any existing governance and compliance frameworks may be leveraged for the new requirements.
- Establish a process for identifying incidents that qualify as critical safety incidents under the Law and the appropriate procedures for reporting such incidents.
- Update whistleblower reporting mechanisms and ensure that employees are provided notice of their rights under the Law.
Definitions
- Catastrophic risk: a foreseeable and material risk that a frontier developer’s development, storage, use, or deployment of a frontier model will materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars ($1,000,000,000) in damage to, or loss of, property arising from a single incident involving a frontier model doing any of the following: (i) providing expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon; (ii) engaging in conduct with no meaningful human oversight, intervention, or supervision that is either a cyberattack or, if the conduct had been committed by a human, would constitute the crime of murder, assault, extortion, or theft, including theft by false pretense; and/or (iii) evading the control of its frontier developer or user.
- Critical safety incident: any of the following: (i) unauthorized access to, modification of, or exfiltration of, the model weights of a frontier model that results in death or bodily injury; (ii) harm resulting from the materialization of a catastrophic risk; (iii) loss of control of a frontier model causing death or bodily injury; or (iv) frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.
- Foundation model: an AI model that is all of the following: (i) trained on a broad data set; (ii) designed for generality of output; and (iii) adaptable to a wide range of distinctive tasks.
- Frontier AI framework: documented technical and organizational protocols to manage, assess, and mitigate catastrophic risks.
- Frontier developer: a person who has trained, or initiated the training of, a frontier model, with respect to which the person has used, or intends to use, at least as much computing power to train the frontier model as would meet the technical specifications in the Law.
- Frontier model: a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations. The quantity of computing power must include computing for the original training run and for any subsequent fine-tuning, reinforcement learning, or other material modifications the developer applies to a preceding foundation model.
- Large frontier developer: a frontier developer that together with its affiliates collectively had annual gross revenues in excess of $500 million in the preceding calendar year.
On or before January 1, 2027, and annually thereafter, the Department of Technology will make recommendations about whether and how to update the definitions of frontier model, frontier developer, and large frontier developer.