A bill passed both chambers of the New York State legislature. It incorporated a lot of feedback from this community. The bill’s author spoke about it as a keynote speaker at an event organized by FAR at the end of May.
There’s no good theory of change for Anthropic that is compatible with opposing and misrepresenting this bill. If you work at Anthropic on AI capabilities, you should stop.
From Jack Clark:
We’ve given some feedback on this bill, as we do with many bills at both the federal and state level. Despite improvements, we continue to have some concerns:
(Many such cases!)
- RAISE is overly broad/unclear in some of its key definitions, which makes it difficult to know how to comply
- If the state believes there is a compliance deficiency in a lab’s safety plan, it’s not clear you’d get an opportunity to correct it before enforcement kicks in
- The definition of ‘safety incident’ is extremely broad/unclear and the turnaround time is very short (72 hours!). This could make for lots of unnecessary over-reporting that distracts from actual big issues
- It also appears multi-million dollar fines could be imposed for minor, technical violations—this represents a real risk to smaller companies
If there isn’t anything at the federal level, we’ll continue to engage on bills at the state level—but as this thread highlights, this stuff is complicated.
Any state proposals should be narrowly focused on transparency and not overly prescriptive. Ideally there would be a single rule for the country.
Here’s what the bill’s author says in response:
Jack, Anthropic has repeatedly stressed the urgency and importance of the public safety threats it’s addressing, but those issues seem surprisingly absent here.
Unfortunately, there’s a fair amount in this thread that is misleading and/or inflammatory, especially “multi-million dollar fines could be imposed for minor, technical violations—this represents a real risk to smaller companies.”
An army of lobbyists is painting RAISE as a burden for startups, and this language perpetuates that falsehood. RAISE only applies to companies that are spending over $100M on compute for the final training runs of frontier models, which is a very small, highly-resourced group.
In addition, maximum fines are typically only applied by courts for severe violations, and it’s scaremongering to suggest that the largest penalties will apply to minor infractions.
The 72 hour incident reporting timeline is the same as the cyber incident reporting timeline in the financial services industry, and only a short initial report is required.
AG enforcement plus a right to cure would be effectively toothless, could lead to uneven enforcement, and seems like a bad idea given the high stakes of the issue.