What are the theories of change behind SB 53 and the RAISE Act? What else do they need to see those theories through?
These laws both aim to regulate frontier AI, and their success has been used to argue for supporting their respective political owners (Scott Wiener and Alex Bores[1]). But neither law is going to do much on its own. That's not necessarily a problem: starting small and working your way up is a classic political strategy. But I'd like to understand whether these laws are essentially symbolic and coalitional, or good on the margin (the way changing health insurance rules got real gay people health insurance that paid for real health care, which would be beneficial even if it didn't advance gay marriage an iota), or groundwork-laying (the way a law against murder is not useful without cops or prisons, but is still a necessary component).
[1] Which isn't necessarily wrong even if the bills themselves are useless: maybe the best way to get optimal AI legislation is to support people who support any AI legislation, at least for now.
I'm not 100% sure exactly what the bill authors intended, but I'd previously written up why I think the bills make sense in What is SB 1047 *for*?
tl;dr: it establishes a fairly reasonable operationalization of how the government should think about regulating frontier AI. It establishes both "frontier AI with large compute runs" as a category to regulate, and "catastrophic risk" as the particular problem those regulations are meant to address.
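For concreteness, here's a toy sketch of what that "large compute runs" operationalization looks like. The 10^26 FLOP figure is the training-compute threshold SB 1047 and SB 53 use to define a frontier model (the RAISE Act uses a similar compute-based definition); the function and variable names below are hypothetical, just illustrating the framing:

```python
# Toy illustration of the "frontier AI with large compute runs" framing.
# The 1e26 FLOP cutoff matches SB 1047 / SB 53's frontier-model definitions;
# the names and structure here are hypothetical.

FRONTIER_COMPUTE_THRESHOLD_FLOP = 1e26  # statutory training-compute cutoff

def is_covered_frontier_model(training_compute_flop: float) -> bool:
    """Return True if a model's training run crosses the statutory threshold."""
    return training_compute_flop > FRONTIER_COMPUTE_THRESHOLD_FLOP

# A hypothetical 3e26-FLOP training run would be covered; a 1e24-FLOP one would not.
assert is_covered_frontier_model(3e26)
assert not is_covered_frontier_model(1e24)
```

The point isn't the specific number; it's that the law picks a bright-line, measurable trigger for regulation rather than a vague "dangerous AI" standard.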
The way this pays off later is if we successfully build good third-party auditing that evaluates "is an AI potentially dangerous?", "is an AI deceptive/scheming?", and "is an AI demonstrably safe?", which would enable more concrete regulation with more teeth.