If your choice of arguments for AI safety or your way of presenting yourself feels too closely associated with one political party, then you might create or strengthen an association between AI safety and that party, which could lead the other party to reject AI safety on partisan grounds.
This already happened to an extent, and it wasn’t because the advocates had trouble suppressing their partisan instincts. They focused on technocratic solutions and advocated them to the people in power at the time—the Biden administration. It sort of worked. Then the administration implemented some of them (or started to lay the groundwork for implementing them) and that alone was enough to create lasting Republican opposition to something that was previously pretty neutral.
Is Republican opposition to AI regulation “lasting”? I notice there’s some movement this week on AI whistleblower protection from Congressional Republicans.
My read of the situation is that Republicans largely oppose whatever Biden supported, but weakly. That opposition can be, and has been, overcome in several policy areas. Trump repealed the Biden AI Executive Order, and JD Vance gave a critical speech at the Paris AI Action Summit, but I wouldn’t assume their opposition is lasting or categorical.
I’d be very interested to read a post on this, though! If you have more thoughts on how sticky their opposition is, please write them up.
Well, I certainly hope you’re right, and it remains to be seen. I don’t think I have any special insights.
I tend to agree with John—I think the Republican opposition is mostly a mixture of general hostility toward regulation as a tool and a desire to publicly reject the Biden Administration and all its works. My experience talking with Republican staffers, even in 2025, is that they aren’t hostile to the idea of reducing catastrophic risks from AI; they’re just waiting to see what the Trump Administration’s official position is on the issue. We’ll know more about that when we see the OSTP AI Action Plan in July.
Part of the problem with AI safety advocacy in 2022–2024 was that it ignored my advice to “insist that the bills you endorse have co-sponsors from both parties.” By definition, an executive order is not bipartisan. You can frame the Biden EO as general technocracy if you like, but it’s still careless to push for an EO that’s only supported by one party. If you want to avoid alienating Republicans (and you should want to avoid doing that), then you need to make sure you have at least some Republicans willing to go on record as publicly endorsing your policy proposals before you enact them.
I think there was (and is) a common belief that Congress won’t do anything significant on AI anytime soon, which makes executive action appealing if you think time is running out. If what you’re suggesting here is more like a variant of the “wait for a crisis” strategy—get the legislation ready, talk to people about it, and then when Congress is ready to act, they can reach for it—I’m relatively optimistic about that. As long as there’s time.
Yes, CAIP’s strategy was primarily to get the legislation ready, talk to people about it, and prepare for a crisis. We also encouraged people to pass our legislation immediately, but I was not especially optimistic about the odds that they would agree to do so.
I don’t object to people pushing legislators from both parties to act more quickly... but you have to honor their decision if they say “no,” no matter how frustrating that is or how worried you are about the near-term future, because trying to do an end-run around their authority will quickly and predictably backfire.
In my opinion, going behind legislators’ backs to the Biden administration was particularly unhelpful in the case of the Biden AI EO, because the contents of that EO would have led to only a small reduction in catastrophic risk—it would be nice to require reports on the results of red-teaming, but the EO by itself wouldn’t have stopped companies from reporting that their models seemed risky and then releasing them anyway. We would have needed to follow up on the EO and enact additional policies in order to have a reasonable chance of survival, but proceeding via unilateral executive action tended to undermine our ability to get those additional policies passed. So it’s not clear to me what the overall theory of change was for rushing forward with an EO.
You may be right about the EO. At the time I felt it was a good thing, because it raised the visibility of safety evaluations at the labs and brought regulation of training, as well as deployment, more into the Overton window. Even without follow-up rules, I think it can be the case that getting a company to report the bad things strongly incentivizes it to reduce the bad things.
I could carry on debating the pros and cons of the EO with you, but I think my real point is that bipartisan advocacy is harmless. You shouldn’t worry that bipartisan advocacy will backfire, so the fear that advocacy might backfire can’t justify doing no advocacy at all.
If you believe strongly enough in the merits of working with one party to be confident that it won’t backfire, fine, I won’t stop you—but we should all be able to agree that more bipartisan advocacy would be good, even if we disagree about how valuable one-party advocacy is.