Yes, that’s exactly right, we do. That’s what it means to be an ally rather than a friend. America allied with the Soviet Union in World War 2; this is no different. When someone earnestly offers to help you literally save the world, you hold your nose and shake their hand.
I wholeheartedly agree that it can be worth allying with groups that you don’t personally like. That said, I think there’s still hope that AI safety can avoid being a strongly partisan-coded issue. Some critical safety issues manage to stay nonpartisan for the long term — eg opposition to the use of chemical weapons and bioweapons is not very partisan-coded in the US (in general, at least; I’m sure certain aspects of it have been partisan-coded at one time or another).
So while I agree that it’s worth allying with partisan groups in some ways (eg when advocating for specific legislation), it seems important to consistently emphasize that this is an issue that transcends partisan politics, and that we’re just as happy to ally with AI-skeptical elements of the left (eg AI ethics folks) as we are with AI-skeptical elements of the right.
Of course, some individual people may be strongly partisan themselves and only care about building allyships with one side or the other. That’s fine! There’s no reason why the AI safety community needs to be monolithic on anything but the single issue we’re pushing for, that humanity needs to steer clear of catastrophic and existential outcomes from AI.
Agreed; well said.
Ally on what issues exactly? What I’m getting from the article is they want anti-AI protectionism, consistent with their positions on immigration and trade. Good enough for Remmelt and the StopAI crowd, but I don’t expect anti-techs (of either the deep green or national conservative type) to support technical safety, global priorities research, AI welfare, AI x Animals, acceleration of defensive technologies, or governance to counter the intelligence curse (indeed Miller fearmongers about “UBI-based communism”!).
Maybe you’re reading some other motivations into them, but if we just list the concerns in the article, only 2 out of 11 indicate they want protectionism. The rest of the items that apply to AI include threats to conservative Christian values, threats to other conservative policies, and things we can mostly agree on. This gives us a lot to ally on, especially the ideas that Silicon Valley should not be allowed unaccountable rule over humanity, and that we should avoid destroying everything to beat China. It seems like a more viable alliance than with the fairness and bias people; plus, conservatives have way more power right now.
Mass unemployment
“UBI-based communism”
Acceleration to “beat China” forces sacrifice of a “happier future for your children and grandchildren”
Suppression of conservative ideas by big tech, eg algorithmic suppression and demonetization
Various ways that tech destroys family values
Social media / AI addiction
Grok’s “hentai sex bots”
Transhumanism as an affront to God and to “human dignity and human flourishing”
“Tech assaulting the Judeo-Christian faith...”
Tech “destroying humanity”
Tech atrophying the brains of their children in school and destroying critical thought in universities
Rule by unaccountable Silicon Valley elites lacking national loyalty
Approximately none of those things are immediately relevant to AI safety, and some if not most of them are cases of strong divergence of interests and values (I already mentioned “UBI-based communism”). I don’t want to lean too hard on arguing terminology, but most of this stuff I would in fact consider broadly “protectionist”, in the sense of seeking policies to clamp down (quantitatively, not qualitatively) on the adoption (not development, at least not directly) of AI systems in particular contexts, which is neutral to negative in terms of AI safety.
The only things that could really be relevant to AI safety (like pushing back on arms-race rhetoric, or antitrust policy against Silicon Valley) are already largely bipartisan to D-leaning, and strongly endorsed by the fairness and bias people, meaning national conservatives would only be useful as tie-breakers. This is good, but I don’t really see the marginal utility of “building bridges” with occasional tie-breakers beyond single-issue campaigns (like the fight against the proposed federal AI regulation moratorium).
I expect fairness and bias people could support (and have supported) technical safety, AI x Animals (represented at FAccT 2025), and governance to counter the intelligence curse, but not global priorities research, AI welfare, or acceleration of defensive technologies (maybe? I guess what DAIR is doing could be called “acceleration of defensive technologies” if you squint a bit).
Alliance with the fairness and bias people is objectively more viable. You can coherently argue that you should ally with both or neither, or with the one with the greatest intersection of commonly held positions, but arguing you should ally with the one with the least intersection of commonly held positions seems like a double standard motivated by prior political bias. (For the record, @Remmelt does in fact support allying with fairness and bias people, and is friends with Émile Torres.)
As an aside, I also don’t think this specific faction of conservatives (maybe worth tabooing the word? note I referred to “anti-techs” with left-coded and right-coded examples) has the required political power compared to opposing factions like the tech right or the neocons (see e.g. the Iran strikes, or the H1B visa conflict), and the latter have considerable leverage over them, given the importance of miltech (like Palantir’s) to efforts that national conservatives still consider a lexical priority over bioconservatism (like ICE).
support technical safety, global priorities research, AI welfare, AI x Animals, acceleration of defensive technologies, or governance to counter the intelligence curse (indeed Miller fearmongers about “UBI-based communism”!).
FWIW, I approximately don’t think any of those things matter compared to just not building AGI. Other people can disagree, of course, but please do not count me as someone who thinks those things are of comparable importance!
Even from a PauseAI standpoint (which isn’t my stance, though I do think global compute governance would be a good thing if achievable), I don’t see nationalists (some of whom want the US to leave the United Nations) pushing for global compute governance with China. This is really only convincing from a specifically StopAI standpoint, where you push for a national ban because you believe that everyone, regardless of prior political beliefs, risk tolerance, or likelihood of ending up as a winner post-intelligence-curse, will agree on stopping AGI and not taking part in an arms race if exposed to the right arguments, and expect that people everywhere else on Earth will also push for a national ban in their own countries without any coordination.
Part of the deal of being allies is that you don’t have to be allies about everything. I don’t think they particularly need to do anything to help with technical safety (there just need to be people who understand and care about that somewhere). I’m pretty happy if they’re just on board with “stop building AGI” for whatever reason.
I do think they eventually need to be on board with some version of handling the intelligence curse (I didn’t know that term; here’s a link), although I think in a lot of worlds the gameboard is so obviously changed that I expect handling it to be an easier sell.
I’m pretty happy if they’re just on board with “stop building AGI” for whatever reason.
Thank you for editing (the sentence was cut short in an earlier version). Reiterating what I said to @habryka with the same remark:
Even from a PauseAI standpoint (which isn’t my stance, though I do think global compute governance would be a good thing if achievable), I don’t see nationalists (some of whom want the US to leave the United Nations) pushing for global compute governance with China. This is really only convincing from a specifically StopAI standpoint, where you push for a national ban because you believe that everyone, regardless of prior political beliefs, risk tolerance, or likelihood of ending up as a winner post-intelligence-curse, will agree on stopping AGI and not taking part in an arms race if exposed to the right arguments, and expect that people everywhere else on Earth will also push for a national ban in their own countries without any coordination.
Can you explain “defensive technologies”?
Do any of these defensive technologies allow people to survive an unaligned AI that they wouldn’t have survived without the defensive technology?
Automated AI safety research, biosecurity, cybersecurity (including AI control), possibly traditional transhumanism (brain-computer interfaces, intelligence augmentation, whole brain emulation).
If we end up in a world with mass unemployment (like 90%), I expect those people currently self-identifying as conservatives to support strong redistribution of income, along with almost all others. I expect strong redistribution to happen in countries where democracy with income-independent voting rights is still alive by then, if any. In those where it’s not, maybe it won’t happen and people might die of starvation, be driven out of their homes, etc.
Do you believe mass unemployment will jump from ~0-10% in developed countries to 90% overnight? If not, the political question of whether to respond to unemployment increases with redistribution or with protectionism (of any kind; it likely won’t be immediately clear whether AI, rather than other political grievances, is responsible) will be particularly salient in the short term.
I don’t really understand your thoughts about developing vs developed countries and protectionism, could you make them more explicit?
Sorry, typo. I didn’t mean to make a connection between those two; it’s just that many developing countries have higher unemployment rates for reasons that aren’t really relevant to what we’re talking about here.
Thanks for correcting it. I still don’t really get your connection between protectionism and mass unemployment. Perhaps you could make it explicit?
? Protectionism (whether against AI, or immigration, or trade) is often justified by concerns about job loss.
“Protectionism against AI” is a bit of an indirect way to point at not using AI for some tasks for job market reasons, but thanks for clarifying. Reducing immigration or trade won’t solve AI-induced job loss, right? I do agree that countries could decide to either not use AI, or redistribute AI-generated income, with the caveat that those choosing not to use AI may be outcompeted by those who do. I guess we could, theoretically, sign treaties to not use AI for some jobs anywhere.
I think AI-generated income redistribution is more likely, though, since it seems like the obviously better solution.
My point was that in the first stages of AI-induced job loss, it might not be clear to everyone (whether due to genuine epistemic uncertainty or due to partisan bias) whether the job loss was caused by AI or by their previously preferred political grievance. This was just an aside, though, and not important to my broader point.
Anti- vs pro-tech is an outdated, needlessly primitive, and needlessly polarizing framework for looking at the world. We should obviously consider which tech is net positive and build that, and which tech is net negative and regulate it at the point where it starts being so.
I think anti-tech vs pro-tech is in fact going to become a more important political axis, orthogonal to the left-right axis, as time goes on (and OP seems like clear evidence for that?), and the position you suggest is just ‘centrism’ on that axis. See fallacy of gray.
How would you define pro-tech, which I assume you identify as? For example, should AI replace humanity a) in any case if it can, b) only if it’s conscious, c) not at all?
Consider an axis where on one end you’ve got Shock Level Four and on the opposite end you’ve got John Zerzan. Anything in between is some gradation of gray where you accept some proportion p of all available technology.
Scifi was probably fun to think about for some in the 90s, but things got more serious when it became clear the singularity could kill everyone we love. Yud bit the bullet and now says we should stop AI before it kills us. Did you bite that bullet too? If so, you’re not purely pro-tech anymore, whether you like it or not. (Which I think shouldn’t matter, because pro- vs anti-tech has always been a silly way to look at the world.)
I think this is a silly argument, comparable to saying that if you don’t want to bite the bullet of Esoteric Hitlerism you aren’t a true right-winger, or that if you don’t want to bite the bullet of Posadism you aren’t a true left-winger. Yud, as of right now, believes we should research intelligence augmentation technology so that supercharged AI safety researchers can build Friendly AI, right?
I agree. The “jesus” was halfway a joke about the religious ties. And halfway steeling myself for that handshake.