IANAL, but I believe it would be legal for OpenAI (which is controlled by a nonprofit) and Anthropic (which is a public benefit corporation).
I don’t think it’s known whether it would be legal for an ordinary for-profit, because there’s no precedent.
“Sorry, we wish we could’ve not killed everyone, but we had to uphold our fiduciary duty” is a really weak defense.
There are some kinds of lawbreaking that I would not endorse (e.g. violence) but I have no hard line against violating fiduciary duty.
(In general, you should abide by your fiduciary duty, but some ethical prescriptions weigh more heavily, and I think this is widely recognized in many contexts. For example, whistleblowers who expose their companies’ unethical behavior are violating their fiduciary duty, since exposing bad behavior reduces profit.)
There are a few orgs doing things like this:
- AI Lab Watch rates AI companies on their safety procedures along various dimensions.
- AI Safety Claims Analysis critically reviews AI companies’ safety claims.
- The Midas Project hosts several websites documenting AI companies’ behavior; I think the most relevant one is Seoul Tracker, which tracks how well AI companies are living up to the commitments they made at the Seoul summit.
- SaferAI gives AI companies risk management ratings.
The first two of these are solo projects by Zach Stein-Perlman.