[co-author: Stephanie Kozol]*
On September 29, 2025, California Governor Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, into law. The act takes effect on January 1, 2026. It builds on the recommendations of the “California Report on Frontier AI Policy,” released to the public on June 17, 2025, which set out key principles to guide the drafting of the legislation, including grounding AI policy in empirical research and providing greater transparency into AI systems. California dominates the AI industry, home to 32 of the top 50 AI companies worldwide, so it is no surprise that it is the first state to adopt rules promoting safety, transparency, and incident reporting for frontier models. The act is expected to set the stage for similar AI legislation across the U.S.
Frontier AI Models
The act applies to developers of frontier AI models, defined as any foundation model “that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations.” The act is the first American legislation to use a minimum threshold of computing power, also called “compute,” as a proxy for how advanced an AI system is and how much risk it poses.
Most, though not all, of the act’s provisions are aimed at large frontier developers, defined as “a frontier developer that together with its affiliates collectively had annual gross revenues in excess of $500 million in the preceding calendar year.” And because the act reaches only the most advanced, compute-intensive models, in practice much of it will apply only to the largest AI labs.
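To make these two triggers concrete, the short sketch below (in Python, for illustration only) checks a hypothetical developer against both thresholds. Note that the act measures the compute threshold against a model’s actual training compute; the estimate of training FLOPs as roughly six times parameters times training tokens is a common industry rule of thumb, not part of the statute, and the function names here are hypothetical.

```python
# Illustration only -- not legal guidance. The "6 * parameters * tokens"
# rule of thumb for estimating training compute is a common heuristic and
# is NOT part of the act, which looks at actual training compute.

COMPUTE_THRESHOLD_FLOPS = 1e26       # "frontier model" trigger under the act
REVENUE_THRESHOLD_USD = 500_000_000  # "large frontier developer" trigger


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rule-of-thumb estimate: roughly 6 FLOPs per parameter per training token."""
    return 6.0 * parameters * training_tokens


def classify(parameters: float, training_tokens: float, annual_revenue_usd: float) -> str:
    """Hypothetical helper: classify a developer under the act's two thresholds."""
    flops = estimated_training_flops(parameters, training_tokens)
    if flops <= COMPUTE_THRESHOLD_FLOPS:
        return "not a frontier model (estimated compute at or below 10^26 FLOPs)"
    if annual_revenue_usd > REVENUE_THRESHOLD_USD:
        return "frontier model from a large frontier developer"
    return "frontier model, but developer below the $500 million revenue threshold"


# Example: a 1-trillion-parameter model trained on 20 trillion tokens works out
# to roughly 1.2e26 estimated FLOPs -- above the compute threshold -- and a
# developer with $600 million in revenue exceeds the revenue threshold.
print(classify(parameters=1e12, training_tokens=2e13, annual_revenue_usd=6e8))
```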
Purposes of the Act
The act is multifaceted and regulates frontier AI models in several ways. Large frontier developers must publish a frontier AI framework documenting the technical and organizational protocols they use to assess, manage, and mitigate catastrophic risk. Among the framework’s statutory requirements, each developer must describe how it incorporates national standards, international standards, and industry-consensus best practices. The framework must be reviewed, and updated as appropriate, at least once a year.
New reporting mechanisms are integral to the act’s focus on promoting the safe use of these powerful frontier AI models. Before deploying a new or substantially modified model, all frontier developers, not just large ones, must publish a transparency report. The report must include the developer’s website and communication mechanisms, the model’s release date, supported languages and output modalities, intended uses, any usage restrictions, the results of risk assessments (including any third-party involvement and the steps taken to comply with the developer’s frontier AI framework), and other relevant information.
When a critical safety incident occurs, developers must report it to the California Office of Emergency Services (COES) within 15 days of discovering the incident, or within 24 hours if the incident poses an imminent threat of death or serious injury. A critical safety incident is defined as (1) unauthorized access to, modification of, or exfiltration of the model weights of a frontier model that results in death or bodily injury; (2) harm resulting from the materialization of a catastrophic risk; (3) loss of control of a frontier model causing death or bodily injury; or (4) a frontier model deceptively bypassing its developer’s controls outside of testing in a way that significantly increases catastrophic risk.
COES is also responsible for creating a system for whistleblowers to report incidents in which AI behavior could result in death, injury, or other catastrophic harm. Whistleblowers can enforce these protections through civil lawsuits or administrative actions, though relief is limited to injunctive relief and attorneys’ fees. Companies that fail to comply with the reporting or disclosure requirements face civil penalties of up to $1 million per violation, enforced by the attorney general’s office. Companies should update their internal human resources policies to account for the act’s whistleblower protections.
California is offering resources allowing researchers, government agencies, and startup businesses to use frontier AI models, ensuring access for those unable to develop such systems. CalCompute, a publicly owned computing cluster (or group of computers combining their power to function as one) established by the act within California’s Government Operations Agency (GOA), will provide this access. The GOA is due to submit a report to the Legislature by January 1, 2027, to inform the creation and operation of CalCompute. This report will analyze a wide range of topics, including “California’s current public, private, and nonprofit cloud computing platform infrastructure,” an “analysis of the cost to the state to build and maintain CalCompute and recommendations for potential funding sources,” and “[r]ecommendations for the parameters for use of CalCompute, including, but not limited to, a process for determining which users and projects will be supported by CalCompute.”
The act also directs the California Department of Technology (CDOT) to keep the law current as technology evolves. Beginning in 2027, and annually thereafter, CDOT will reassess certain of the act’s definitions, an acknowledgement that terms like “frontier model” and “large frontier developer” may evolve with AI’s rapid progress. In making its recommendations, CDOT must consider similar thresholds used in federal and international law and incorporate input from multiple stakeholders, technological advancements, and international standards. While CDOT has the authority to recommend updates, the Legislature must approve and adopt them. Additionally, the act allows for compliance with emerging federal standards on frontier AI models, ensuring alignment with national AI laws and avoiding redundancy.
Why It Matters
Frontier AI developers must understand the act’s requirements and make any necessary changes to their governance, risk assessment, and reporting practices. While the legislation currently focuses on the largest models, even businesses that do not meet the threshold requirements should build these obligations into their compliance programs going forward. California previously set the stage for a national trend in legislation with the California Consumer Privacy Act, and it would not be a surprise if this act inspires similar statutes regulating AI. As AI technology becomes more democratized and the industry expands, the thresholds that determine which companies are subject to regulations like this one are expected to come down.
Companies should recognize now that the act’s transparency, safety, and accountability requirements will demand continuous evaluation of the risks associated with frontier AI use. If a risk materializes, firms need incident response protocols already in place, because the deadlines for reporting critical safety incidents are stringent. Even companies that do not use frontier models should be aware of the act’s requirements and build their compliance programs with these obligations in mind.
*Senior Government Relations Manager