I’m generally pretty receptive to “adjust the Overton window” arguments, which is why I think it’s good PauseAI exists, but I do think there’s a cost in political capital to saying “I want a Pause, but I am willing to negotiate”. It’s easy for your opponents to cite your public Pause support and then say, “look, they want to destroy America’s main technological advantage over its rivals” or “look, they want to bomb datacenters, they’re unserious”. (Yes, a Pause as typically imagined requires international treaties, but the attack lines would probably still work; there was tons of lying in the California SB 1047 fight, and we lost in the end.)
The political position AI safety has mostly taken instead on US regulation is “we just want some basic reporting and transparency”, which is much harder to argue against, more achievable, and still pretty valuable.
I can’t say I know for sure this is the right approach to public policy. There’s a reason politics is a dark art: there’s a lot of triangulating between “real” and “public” stances, and compromising your dedication to the truth like that isn’t costless. But I think it’s part of why there isn’t as much support for PauseAI as you might expect. (The other main part being what 1a3orn says: PauseAI is on the radical end of opinions in AI safety, and it’s natural there’d be a gap between moderates and them.)
Very briefly: the fact that “the political position AI safety has mostly taken” is a single stance is evidence that there’s no room for other, more creative solutions, which means we’ve failed hard at expanding that Overton window. And unless you are strongly confident that stance is the only possibly useful strategy, that is a horribly bad position for the world to be in as AI continues to accelerate and likely forecloses other potential policy options.