On December 19, the governor of New York enacted the Responsible AI Safety and Education Act (RAISE Act), amending the state’s general business law to provide transparency requirements and guardrails on the training and use of AI “frontier models,” as defined within the law. The Act requires large developers — those who have trained at least one frontier model and spent over $100 million in compute costs — to implement and retain a written safety protocol before deploying any frontier model. These protocols, along with records of any updates or revisions, must be maintained for as long as the model is deployed plus five years.
The Act prohibits deploying a frontier model if doing so would create an unreasonable risk of critical harm. Safety protocols must be reviewed annually to account for changes in model capabilities and industry best practices, and any material modifications must be reported. Any safety incident must be disclosed to the attorney general and the Division of Homeland Security and Emergency Services within seventy-two hours.
For violations of the Act, the attorney general may bring civil actions seeking penalties of up to $10 million for a first violation and up to $30 million for each subsequent violation, as well as injunctive or declaratory relief. The Act also prohibits knowingly making false or materially misleading statements or omissions in required documents. The Act takes effect 90 days after becoming law.