Do we need to ally with these people? Jesus.
Yes, that’s exactly right, we do. That’s what it means to be an ally rather than a friend. America allied with the Soviet Union in World War 2; this is no different. When someone earnestly offers to help you literally save the world, you hold your nose and shake their hand.
I wholeheartedly agree that it can be worth allying with groups that you don’t personally like. That said, I think there’s still hope that AI safety can avoid being a strongly partisan-coded issue. Some critical safety issues manage to stay nonpartisan for the long term — eg opposition to the use of chemical weapons and bioweapons is not very partisan-coded in the US (in general, at least; I’m sure certain aspects of it have been partisan-coded at one time or another).
So while I agree that it’s worth allying with partisan groups in some ways (eg when advocating for specific legislation), it seems important to consistently emphasize that this is an issue that transcends partisan politics, and that we’re just as happy to ally with AI-skeptical elements of the left (eg AI ethics folks) as we are with AI-skeptical elements of the right.
Of course, some individual people may be strongly partisan themselves and only care about building allyships with one side or the other. That’s fine! There’s no reason why the AI safety community needs to be monolithic on anything but the single issue we’re pushing for, that humanity needs to steer clear of catastrophic and existential outcomes from AI.
Agreed; well said.
Ally on what issues exactly? What I’m getting from the article is they want anti-AI protectionism, consistent with their positions on immigration and trade. Good enough for Remmelt and the StopAI crowd, but I don’t expect anti-techs (of either the deep green or national conservative type) to support technical safety, global priorities research, AI welfare, AI x Animals, acceleration of defensive technologies, or governance to counter the intelligence curse (indeed Miller fearmongers about “UBI-based communism”!).
Maybe you’re reading some other motivations into them, but if we just list the concerns in the article, only 2 out of 11 indicate they want protectionism. The rest of the items that apply to AI include threats to conservative Christian values, threats to other conservative policies, and things we can mostly agree on. This gives a lot to ally on, especially the idea that Silicon Valley should not be allowed unaccountable rule over humanity, and that we should avoid destroying everything to beat China. It seems like a more viable alliance than with the fairness and bias people; plus conservatives have way more power right now.
Mass unemployment
“UBI-based communism”
Acceleration to “beat China” forces sacrifice of a “happier future for your children and grandchildren”
Suppression of conservative ideas by big tech eg algorithmic suppression, demonetization
Various ways that tech destroys family values
Social media / AI addiction
Grok’s “hentai sex bots”
Transhumanism as an affront to God and to “human dignity and human flourishing”
“Tech assaulting the Judeo-Christian faith...”
Tech “destroying humanity”
Tech atrophying the brains of their children in school and destroying critical thought in universities.
Rule by unaccountable Silicon Valley elites lacking national loyalty.
Approximately none of those things are immediately relevant to AI safety, and some if not most of them are cases of strong divergence of interests and values (I already mentioned “UBI-based communism”). I don’t want to lean too heavily on arguments about terminology, but most of this stuff I would in fact consider broadly speaking “protectionist”, in the sense of seeking policies to clamp down (quantitatively, not qualitatively) on the adoption (not the development, at least not directly) of AI systems in particular contexts, which is neutral to negative in terms of AI safety.
The only things that could really be relevant to AI safety (like pushing back on arms-race rhetoric, or antitrust policy against Silicon Valley) are already somewhere between bipartisan and D-leaning, and strongly endorsed by the fairness and bias people, meaning national conservatives would only be useful as tie-breakers. This is good, but I don’t really see the marginal utility of “building bridges” with occasional tie-breakers beyond single-issue campaigns (like the fight against the proposed federal AI regulation moratorium).
I expect fairness and bias people could support (and have supported) technical safety, AI x Animals (represented at FAccT 2025), and governance to counter the intelligence curse, but not global priorities research, AI welfare, or acceleration of defensive technologies (maybe? I guess what DAIR is doing could be called “acceleration of defensive technologies” if you squint a bit).
Alliance with them is objectively more viable. You can coherently argue that you should ally with both or neither, or with the one with the greatest intersection of commonly held positions, but arguing you should ally with the one with the least intersection of commonly held positions seems like a double standard motivated by prior political bias. (For the record, @Remmelt does in fact support allying with fairness and bias people, and is friends with Émile Torres.)
As an aside, I also don’t think this specific faction of conservatives (maybe worth tabooing the word? note I referred to “anti-techs” with left-coded and right-coded examples) has the required political power compared to opposing factions like the tech right or neocons (see e.g. the Iran strikes, or the H1B visa conflict), and the latter have considerable leverage over them, given the importance of miltech (like Palantir’s) to efforts national conservatives still consider a lexical priority over bioconservatism (like ICE).
support technical safety, global priorities research, AI welfare, AI x Animals, acceleration of defensive technologies, or governance to counter the intelligence curse (indeed Miller fearmongers about “UBI-based communism”!).
FWIW, I approximately don’t think any of those things matter compared to just not building AGI. Other people can disagree of course, but please do not count me as someone who thinks those things are of comparable importance!
Even from a PauseAI standpoint (which isn’t my stance, but I do think global compute governance would be a good thing if achievable), I don’t see nationalists (some of whom want the US to leave the United Nations) pushing for global compute governance with China. This is really only convincing from a specifically StopAI standpoint, where you push for a national ban because you believe everyone, regardless of {prior political beliefs, risk tolerance, likelihood of ending up as a winner post-intelligence-curse}, will agree on stopping AGI and not taking part in an arms race if exposed to the right arguments, and you expect people everywhere else on Earth to also push for a national ban in their own countries without any coordination.
Part of the deal of being allies is that you don’t have to be allies about everything. I don’t think they particularly need to do anything to help with technical safety (there just need to be people who understand and care about that somewhere). I’m pretty happy if they’re just on board with “stop building AGI” for whatever reason.
I do think they eventually need to be on board with some version of handling the intelligence curse (I didn’t know that term; here’s a link), although I think in a lot of worlds the gameboard is so obviously changed that I expect handling it to be an easier sell.
That’s true, but for the alliance to be functional you need at least moderate trust that whatever your shared interests are, they will persist long enough that the other party won’t stab you in the back right away (unless your entire goal is just to gain as much time as you can before the inevitable betrayal). I think most of these reasons are shallow or misaligned enough, and the crowd in question has shown itself so fickle and prone to simply valuing blind loyalty to its leader over any other overarching values, that an alliance isn’t worth what it costs.
I would be more trusting in general of an alliance with e.g. various religious leaders and factions. Even for all our differences, I would think that they would genuinely think about human dignity as a core principle. But a lot of these people quoted just sound annoyed about the fact that the LLMs aren’t siding with them on the culture war, and would sing another tune if they did.
Thank you for editing (the sentence was cut short in an earlier version). Reiterating what I said to @habryka in response to the same remark: see my comment above about the PauseAI vs. StopAI standpoints.
Can you explain “defensive technologies”?
Do any of these defensive technologies allow people to survive an unaligned AI that they wouldn’t have survived without the defensive technology?
automated AI safety research, biosecurity, cybersecurity (including AI control), possibly traditional transhumanism (brain-computer interfaces, intelligence augmentation, whole brain emulation)
Anti- vs pro-tech is an outdated, needlessly primitive, and needlessly polarizing framework to look at the world. We should obviously consider which tech is net positive and build that, and which tech is net negative and regulate that at the point where it starts being so.
I think anti-tech vs. pro-tech is in fact going to become a more important political axis, orthogonal to the left-right axis, as time goes on (and the OP seems like clear evidence for that?), and the position you suggest is just ‘centrism’ on that axis. See fallacy of gray.
How would you define pro-tech, which I assume you identify as? For example, should AI replace humanity a) in any case if it can, b) only if it’s conscious, c) not at all?
Consider an axis where on one end you’ve got Shock Level Four and on the opposite end you’ve got John Zerzan. Anything in between is some gradation of gray where you accept some proportion p of all available technology.
Sci-fi was probably fun to think about for some in the ’90s, but things got more serious when it became clear the singularity could kill everyone we love. Yud bit the bullet and now says we should stop AI before it kills us. Did you bite that bullet too? If so, you’re not purely pro-tech anymore, whether you like it or not. (Which I think shouldn’t matter, because pro- and anti-tech has always been a silly way to look at the world.)
I think this is a silly argument, comparable to saying that if you don’t want to bite the bullet of Esoteric Hitlerism you aren’t a true right-winger, or if you don’t want to bite the bullet of Posadism you aren’t a true left-winger. Yud, as of right now, believes we should research intelligence augmentation technology so that supercharged AI safety researchers can build Friendly AI, right?
If we end up in a world with mass unemployment (like 90%), I expect those people currently self-identifying as conservatives to support strong redistribution of income, along with almost all others. I expect strong redistribution to happen in countries where democracy with income-independent voting rights is still alive by then, if any. In those where it’s not, maybe it won’t happen and people might die of starvation, be driven out of their homes, etc.
If we end up in a world with mass unemployment (like 90%), to be really blunt, I don’t expect the opinions of the unemployed to count for shit. What are they going to do? Strike with the jobs they don’t have, or violently riot against the newly minted drone armies of unyielding loyalty?
Do you believe mass unemployment will jump from ~0-10% in developed countries to 90% overnight? If not, the political question of whether to respond to unemployment increases by either redistribution or protectionism (of any kind – it likely won’t be immediately clear that AI and not other political grievances will be responsible) will be particularly salient in the short term.
I don’t really understand your thoughts about developing vs developed countries and protectionism, could you make them more explicit?
Sorry, typo. I didn’t mean to make a connection between those two; it’s just that many developing countries have higher unemployment rates for reasons that are not really relevant to what we’re talking about here.
Thanks for correcting it. I still don’t really get your connection between protectionism and mass unemployment. Perhaps you could make it explicit?
? Protectionism (whether against AI, or immigration, or trade) is often justified by concerns about job loss.
“Protectionism against AI” is a bit of an indirect way to point at not using AI for some tasks for job market reasons, but thanks for clarifying. Reducing immigration or trade won’t solve AI-induced job loss, right? I do agree that countries could decide to either not use AI, or redistribute AI-generated income, with the caveat that those choosing not to use AI may be outcompeted by those who do. I guess we could, theoretically, sign treaties to not use AI for some jobs anywhere.
I think AI-generated income redistribution is more likely though, since it seems the obviously better solution.
My point was that in the first stages of AI-induced job loss, it might not be clear to everyone (either due to genuine epistemic uncertainty or due to partisan bias) whether the job loss was induced by AI or their own previous preferred political grievance. This was just an aside and not important to my broader point though.
I agree. The “jesus” was halfway a joke about the religious ties. And halfway steeling myself for that handshake.
Geoffrey Miller is already a member of this community in good standing.
I like some of what Geoffrey does, but I do think at various points he has violated enough norms of reasonable discourse (especially on Twitter) that I wouldn’t consider him in “good standing”.
X (formerly known as Twitter) isn’t for ‘reasonable discourse’ according to the very specific and high epistemic standards of LessWrong.
X is for influence, persuasion, and impact. Which is exactly what AI safety advocates need, if we’re to have any influence, persuasion, or impact.
I’m comfortable using different styles, modes of discourse, and forms of outreach on X versus podcasts versus LessWrong versus my academic writing.
I agree the standards are quite different! Nevertheless, I do currently think you are being overly aggressive even by the standards I would have for the broader rationality community regarding what appropriate norms for Twitter are.
‘Overly aggressive’ is what the shooter who just assassinated conservative Charlie Kirk was being.
Posting hot takes on X is not being ‘aggressive’.
This is not a day when I will tolerate any conflation of posting strong words on social media with committing actual aggressive violence.
This is not the day for that.
I mean, my comment was written before I, at least, had heard any news of that, so I don’t really see its relevance to the conversation.
Also, I really don’t see the relevance of bringing Charlie Kirk into this conversation at all. Like, if you want, we can have a real conversation about whether marginally more aggressive comments on social media were partially responsible for it or not (seems plausible to me, but I haven’t thought much about it), but I am not even sure what you mean by “this is not the day for that”, and it certainly isn’t related to really anything else in this comment section.
You accused me of being ‘overly aggressive’. I was pointing out that tweets aren’t acts of aggression. Shooting people in the neck is.
As far as I can remember, I’ve never called for violence, on any topic, in any of the 80,000 posts I’ve shared on Twitter/X, to my 150,000 followers. So, I think your claim that my posts are ‘overly aggressive’ is poorly calibrated in relation to what actual aggression looks like.
That’s the relevance of the assassination of Charlie Kirk. A reminder that in this LessWrong bubble of ever-so-cautious, ever-so-rational, ever-so-epistemically-pure discourse, people can get very disconnected from the reality of high-stakes political debate and ideologically-driven terrorism.
Of course words can be “aggressive”. Yes, they are a different form of aggression from literal physical violence, but we still have norms for words. Some tweets are obviously “acts of aggression”.
(Unless you mean to import some technical meaning with those words, in which case I am happy to clarify that I am not meaning to import any technical meaning behind “aggression” and just mean the obvious everyday usage of the word)
Regarding “calling for violence”: I can’t find any specific example scrolling through your past tweets, so it’s plausible I am wrong about this! I do think I remember some, but as you say yourself, you have >80,000 tweets and I don’t know of an efficient way to search through all of them. I apologize if it turns out to be wrong, I did not mean to imply a high level of confidence in that specific adjective. There are some tweets that I feel like someone could argue are calls for violence, though I don’t think any of the ones I’ve found with 5 minutes of searching obviously cross that line.
habryka—regarding what ‘aggression’ is, I’m coming to this from the perspective of having taught courses on animal behavior and human evolution for 35 years.
When biological scientists speak of ‘aggression’, we are referring to actual physical violence, e.g. hitting, biting, dismembering, killing, eating, within or between species. We are not referring to vocalizations, or animal signals, or their modern digital equivalents.
When modern partisan humans refer to ‘aggression’ metaphorically, this collapses the distinction between speech and violence. Which is, of course, what censors want, in order to portray speech that they don’t like as if it’s aggravated assault. This has become a standard chant on the Left: ‘speech = violence’.
I strongly disagree with that framing, because it is almost always an excuse for censorship, deplatforming, and ostracizing of political rivals.
I think to maintain the epistemic norms of the Rationality community, we must be very careful not to equate ‘verbal signals we don’t like’ with ‘acts of aggression’.
No, the usual term both in common usage and that biological scientists use for that kind of stuff is “violence”. Aggression very much includes speech. I would be surprised if you were to find biologists consistently avoiding the word aggression when e.g. referring to intimidation behavior between animals in lieu of actual physical contact.
Indeed, just the first Google result for “animal aggression behavior” looks like this:
This also aligns with the common usage of those words.
That said, I am very happy to use a different word for the context of this comment thread if you want. We don’t have to agree on the meanings of all words to have a conversation here.
I’ve seen Eliezer violate what I’d consider norms of reasonable discourse on Twitter. You too.
It’s not a norm of discourse that one cannot state that a position is absurd. And it is a virtue of discourse to show up and argue for one’s stances, as Habryka does throughout that thread!
Speaking as someone who makes very little effort to avoid honey consumption, my opinion of Habryka would have dropped much less if he’d said something like: “Sorry, this position is just intuitively absurd to me, and I’m happy to reject it on that basis.” So I don’t think the issue has to do with absurdity per se.
I said I thought he violated “what I’d consider reasonable norms of discourse”. You can see Ben West thought something similar.
I’d estimate that Habryka violated roughly 7 or 8 of the Hacker News commenting guidelines in that discussion.
Your ideas about reasonable discourse can be different from mine, and Ben West’s, and Hacker News’. That’s OK. I was just sharing my opinion.
It’s been a while since I read that discussion. I remember my estimation of Habryka dropped dramatically when I read it. Maybe I can try to reconstruct why in more detail if you want. But contrasting what Habryka wrote with the HN commenting guidelines seems like a reasonable starting point.
You’ll notice that Habryka doesn’t provide any concrete example of Geoffrey violating a norm of reasonable discourse in this thread. I did provide a concrete example.
Is it possible that invocation of such “norms” can be a mere fig leaf for drawing ingroup/outgroup boundaries in the traditional tribalistic way?
Is it too much to ask that Dear Leadership is held to the same standards, and treated the same way, as everyone else is?
It’s fine for you to have takes about my commenting style on those threads. I do continue to think that post was really quite bad (where, to be clear, most of my objection is to the author somehow taking the result at face value, while feeling no need to caveat or justify it, and linking to the Rethink Priorities report as an authoritative source. I don’t have the same objection to e.g. this old post by Luke, which raised this as a hypothesis but doesn’t make the same errors).
But even if you think I am really mistaken here, I think there is hardly any standard that would make sense to defend on LessWrong that Geoffrey doesn’t routinely violate.
Some quick examples:
Cenk, you’re just upset that in fifty years, nobody will remember an antisemitic hack like you, but there will still be an Israel, standing proud.
Behind your pretty-boy mask, you’re a sociopathic ghoul. Glad that Americans are learning the truth about the deep, dark, bitter pit where your soul should be.
Keir is no different from Ephialtes—the resentful, deformed Spartan who betrayed his homeland to help the Persians invade, hoping they would reward him with the honor and validation his own people would never give him.
This goes on and on and on and on. He has hundreds of tweets like this, with insults, calls for violence, extreme aggression, sneering dismissal, the whole rigmarole.
Again, I appreciate a bunch of his work, but I really don’t think that even by your lights we treat discourse anything close to the same.
I think some of these are shown in what you link, but ‘calls for violence’ I have not seen. I just searched for it a little, and mostly found him speaking against that.
Everybody’s demanding that everybody else disavow the use of violence.
If you’re a libertarian like me, you already believe in the ‘Non-Aggression Principle’: you never initiate the use of force against anyone.
Might be nice if other political groups adopted the NAP....
If you have to resort violence, intimidation, and censorship, you don’t _really_ have any confidence that your ideas are epistemically or ethically compelling.
I also found him to be consistently annoyed about people blurring the line between aggressive speech and physical violence. Here’s one example.
PS one reason I think it’s important to maintain a crisp distinction between persuasion and coercion is that free speech rights are being eroded by creating a grey area between them, e.g. rhetoric that ‘speech is violence’ or ‘words cause trauma’ rhetoric.
Hmm, I don’t super buy this. In 2024 he made a bunch of tweets of this shape:
This feels to me like it’s doing a bunch of blurring between aggressive speech and physical violence.
(It’s also IMO a particularly weird stance to have, because he is clearly calling AI labs an existential threat to democracy, as they are an existential threat to all human things, which by the same logic would be an incitement to violence. I think such statements aren’t, but maybe by Geoffrey’s own logic they are?)
Just to clarify here, I have no issue with you thinking the post is bad. That seems beside the point to me. My issue is with you doing much of what you accuse Miller of doing.
Insults: “This post seems completely insane to me, as do people who unquestionable retweet it.”
Aggression: “I cannot believe I have to argue for this… [cursing]...”
Sneering: “Has anyone who liked this actually read this post? How on earth is this convincing to anyone?”
Note also that the discussion around the Bentham post was previously calm and friendly. You walked in and dramatically worsened the discourse quality. By contrast, Geoffrey engages on hot-button political topics where discussion is already very heated.
As a relatively objective measure, with a quick search over all 80K of Geoffrey Miller’s tweets, I was only able to find one non-quoted f-bomb (“Fuck the Singularity.”).
Your tweets appear to have a somewhat larger number of them, and they’re often directed at individuals rather than abstract concepts. “Fuck them”, “fuck you”, “fuck [those people]”.
As a matter of simple intellectual honesty, it would be nice if you could acknowledge that you engage in insults and aggressive behavior on Twitter. You might be doing it less than Geoffrey does. You might express it in a different way. But it’s just a question of degree, as far as I can tell. I really don’t think you have much moral high ground here.
You also have far fewer tweets than Geoffrey does (factor of ~16 difference). So it’s not just that you’ve dropped more f-bombs than him; your density of f-bombs appears to be far higher.
I… again am happy to accept critique of my posting, but I think you are really weirdly off-base here. Feel free to ask some neutral third-party to do an evaluation of our commenting or tweeting styles and how they compare to local norms of discourse.
In particular, who cares about using words like “fuck”? What does this have to do with anything? Saying “fuck them” is much less aggressive or bad than saying “Behind your pretty-boy mask, you’re a sociopathic ghoul. Glad that Americans are learning the truth about the deep, dark, bitter pit where your soul should be.”!
I have certainly said the former to friends or acquaintances many times and received it many times. If you ever hear me or anyone else say the latter (or anything like it) earnestly to you, I think something is seriously going wrong.
Saying ‘fuck them’ when people are shifting to taking actions that threaten society is expressing something that should be expressed, in my view.
I see Oliver replied that in response to two Epoch researchers leaving to found an AI start-up focused on improving capabilities. I interpret it as ‘this is bad, dismiss those people’. It’s not polite, though maybe for others who don’t usually swear it comes across as much stronger?
To me, if someone posts an intense-feeling, negatively worded text in response to what other people are doing, it usually signals that there is something they care about that they perceive to be threatened. I’ve found it productive to try to relate to that first, before responding. Jumping to enforcing general rules stipulated somewhere in the community, and then implying that the person not following those rules is not harmonious with or does not belong to the community, can be counterproductive.
(Note I’m not tracking much of what Oliver and Geoffrey have said here and on twitter. Just wanted to respond to this part.)
I’m a bit concerned about a situation where “insiders” always get this sort of contextual benefit-of-the-doubt, and “outsiders” don’t.
That’s a healthy hypothesis to track.
Agreed on tracking that hypothesis. It makes sense that people are more open to considering what’s said by an insider they look up to or know. In a few discussions I saw, this seemed a likely explanation.
Also, insiders tend to say more stuff that is already agreed on and understandable by others in the community.
Here there seems to be another factor:
Whether the person is expressing negative views that appear to support vs. to be dissonant with core premises. With ‘core premises’, I mean beliefs about the world that much of the thinking shared in the community is based on, or tacitly relies on to be true.
In my experience (yours might be different), when making an argument that reaches a conclusion that contradicts a core premise in the community, I had to be painstakingly careful to be polite, route around understandable misinterpretations, and address common objections in advance, just to get to a conversation where the argument was explored somewhat openly.
It’s hard to have productive conversations that way. The person arguing against the ‘core premise’ bears by far the most cost, trying to write out responses in a way that might be insightful for others (instead of being dismissed too quickly). The time and strain this takes is mostly hidden to others.
Keep in mind that US conservatives are liable to be reading this thread, trying to determine whether they want to ally with a group such as yourselves. Conservatives have much more leverage to dictate alliance terms than you do. Note the alliance with the AI art people was apparently already wrecked. Something you might ask yourselves: If you can’t make nice with a guy like me, who shares more of your ideals than either artists or US conservatives do, how do you expect to make nice with US conservatives?
Reiterating what I said above: “conservatives” should be tabooed here. It appears to me that this faction is flashy but does not have enough political capital or leverage to decide Republican policy relative to the tech right and neocons, and could only serve as a tie-breaker on issues where Ds and (other) Rs disagree (e.g. antitrust policy). On the flip side, it’s worthwhile talking about how to interact with anti-techs whether they are left-coded (deep greens), right-coded (national conservatives), or whatever the anti-AI artists are.
Sorry, this position is just intuitively absurd to me, and I’m happy to reject it on that basis
To be clear, this is not my position! I am not an intuitionist or anything close to it. This position also seems absurd to me after thinking hard about moral philosophy, and as someone who is pretty sympathetic to general positions that morality can be quite counterintuitive and weird. Please do not summarize my position as arguing primarily from intuition!
Thanks. I’d better stay out of this until I know who that is :)
I think that didn’t tag/notify him but @geoffreymiller does, in case he wants to participate in the discussion.
Thanks for the tag. I’ve just started to read the comments here, and wrote an initial reply.