Maybe you’re reading some other motivations into them, but if we just list the concerns in the article, only 2 out of 11 indicate they want protectionism. The rest of the items that apply to AI include threats to conservative Christian values, threats to other conservative policies, and things we can mostly agree on. This gives a lot to ally on, especially the idea that Silicon Valley should not be allowed unaccountable rule over humanity, and that we should avoid destroying everything to beat China. It seems like a more viable alliance than one with the fairness and bias people; plus, conservatives have far more power right now.
Mass unemployment
“UBI-based communism”
Acceleration to “beat China” forces sacrifice of a “happier future for your children and grandchildren”
Suppression of conservative ideas by big tech, e.g. algorithmic suppression and demonetization
Various ways that tech destroys family values
Social media / AI addiction
Grok’s “hentai sex bots”
Transhumanism as an affront to God and to “human dignity and human flourishing”
“Tech assaulting the Judeo-Christian faith...”
Tech “destroying humanity”
Tech atrophying the brains of their children in school and destroying critical thought in universities
Rule by unaccountable Silicon Valley elites lacking national loyalty
Approximately none of those things are immediately relevant to AI safety, and some, if not most, are cases of strong divergence of interests and values (I already mentioned “UBI-based communism”). I don’t want to lean too heavily on arguing terminology, but I would in fact consider most of this broadly “protectionist”, in the sense of seeking policies to clamp down (quantitatively, not qualitatively) on the adoption (not the development, at least not directly) of AI systems in particular contexts, which is neutral to negative in terms of AI safety.
The only things that could really be relevant to AI safety (like pushing back on arms-race rhetoric, or antitrust policy against Silicon Valley) are already largely bipartisan to D-leaning, and strongly endorsed by the fairness and bias people, meaning national conservatives would only be useful as tie-breakers. That is good, but I don’t really see the marginal utility of “building bridges” with occasional tie-breakers beyond single-issue campaigns (like the fight against the proposed federal AI regulation moratorium).
I expect fairness and bias people could support (and have supported) technical safety, AI x Animals (represented at FAccT 2025), and governance to counter the intelligence curse, but not global priorities research, AI welfare, or acceleration of defensive technologies (maybe? I guess what DAIR is doing could be called “acceleration of defensive technologies” if you squint a bit).
An alliance with them is objectively more viable. You can coherently argue for allying with both or with neither, or with the one sharing the greater intersection of commonly held positions, but arguing for allying with the one sharing the smaller intersection seems like a double standard motivated by prior political bias. (For the record, @Remmelt does in fact support allying with fairness and bias people, and is friends with Émile Torres.)
As an aside, I also don’t think this specific faction of conservatives (maybe worth tabooing the word? note that I referred to “anti-techs” with both left-coded and right-coded examples) has the required political power compared to opposing factions like the tech right or the neocons (see e.g. the Iran strikes, or the H-1B visa conflict). The latter has considerable leverage over them, given the importance of miltech (like Palantir’s) to efforts that national conservatives still consider a lexical priority over bioconservatism (like ICE).