What Happened?
On June 12, 2025, the New York State legislature passed the Responsible AI Safety and Education (RAISE) Act, which now awaits Governor Kathy Hochul’s signature or veto. The RAISE Act targets developers of “frontier” AI models—large models that cost over $100 million to train or rely on massive compute—and aims to reduce the risk of “critical harm,” defined as the death or serious injury of 100 or more people, or $1 billion or more in damages. The law applies only to large-scale frontier AI models; smaller models and start-up initiatives are excluded.
Why is it Important?
If the RAISE Act is signed by the Governor, it would mean that:
- Developers of frontier AI models must create robust safety and security plans before making those models available in New York; publish redacted versions of those plans; retain unredacted copies; and permit annual external reviews and audits.
- Any “safety incident”—from model failure to unauthorized access—must be reported to New York’s Attorney General and the New York Division of Homeland Security within 72 hours.
- The New York AG can penalize violations: up to $10 million for a first offense and $30 million for repeat infractions.
- Employees and contractors are protected when reporting serious safety concerns.
What to do Now?
Pursuant to the New York Senate rules, the RAISE Act must be delivered to the Governor by July 27, 2025; once delivered, Governor Hochul will have 30 days to sign or veto the bill. If it is signed, it will take effect 90 days later.
If the RAISE Act is enacted, in-scope AI firms and developers will need to establish appropriate internal safety protocols, engage third-party auditors, and maintain incident-reporting and whistleblower channels.