My model of Eliezer thinks relatively carefully about most of his comms, but sometimes he gets triggered and says some things in ways that seem quite abrasive (like the linked EA Forum comment). I think this is a thing that somewhat inevitably happens when you are online a lot, and end up arguing with a lot of people who themselves are acting quite unreasonably.
Like, if you look at almost anyone who posts a lot online in contexts that aren’t purely technical discussion, they almost all end up frequently snapping back at people. This is true of Gwern, Zvi, Buck, and to a lesser degree even Scott Alexander if you look at a bunch of his older writing, and most recently I have seen even Kelsey Piper, who has historically been extremely measured, snap back at people on Twitter in ways that suggest to me a lot of underlying agitation. I also do this not-too-infrequently.
I feel pretty confused about the degree to which this is just a necessary part of having conversations on the internet, or to what degree this is a predictable way people make mistakes. I am currently tending towards the former, but it seems like a hard question I would like to think more about.
I dispute that I frequently snap at people. I just read over my last hundred or so LessWrong comments, and I don’t think any of them are well characterized as snapping at someone. I definitely agree that I sometimes do this, but I think it’s a pretty small minority of the things I post. I think Eliezer’s median level of obnoxious, abrasive snappiness (in LessWrong comments over the last year) is about my 98th percentile.
I think your top-level answer on this very post is pretty well characterized as snapping at someone, or at least falls into the broader category of abrasiveness that this post is trying to point to (and that I was also broadly pointing to in my comment).
I also think that if you look at all of Eliezer’s writing, he very rarely snaps at people. The vast majority of his public writing this year is in If Anyone Builds It and the associated appendices, which as far as I can tell contain zero snapping/abrasiveness/etc. My sense is also that approximately zero of his media interviews on the book have contained this thing (though I am less confident of this, since I haven’t seen them all).
I don’t super want to litigate this, though I’m happy to talk with you about it. Among the people I am socially close to, I do think you are basically #2 at doing this (substantially above everyone else except maybe Eliezer, and I don’t know where I would place you relative to him). You do this much less in public, and much more in person and in semi-public settings.
I feel pretty confused about the degree to which this is just a necessary part of having conversations on the internet, or to what degree this is a predictable way people make mistakes.

My intuition is that if our in-person conversations left a trail of searchable documentation similar to our internet comments, it would be at least similarly unflattering, even for very mild-mannered people.
(Unlike in real life, it’s more available to conscious choice online to be mild-mannered all the time, if you set your offense-vs-say-something threshold in a sufficiently mild-mannered direction. I doubt one can be sufficiently influential as a personality without setting that threshold more aggressively, however. I haven’t gotten in a stupid fight on the internet in a long time (that I can recall; my memory may flatter me), but when I posted more, boy howdy did I.)
I think it’s important to distinguish irritation from insult. The internet is a stressful place to communicate. Being snappish and irritable is normal. And many people insult specific groups they disagree with, at least occasionally.
What sets Eliezer apart from Gwern, Scott Alexander, and Zvi is that he insults his allies.
That is not a recipe for political success. I think it makes sense to question whether he’s well suited to the role of public communicator about AI safety issues, given this unusual personality trait of his.
Your conception of “allies” seems… flawed given the history here. I don’t super want to litigate this, but this feels like a particularly weak analysis.
You don’t think EA is an ally of the AI safety movement?
Eliezer definitely doesn’t think of it as an ally (or at least, not a good ally who he is appreciative of and wants to be on good terms with).
Yeah, that’s the problem. EA is the most obvious community clearly invested and interested in the kind of AI safety issues Eliezer focuses on. There’s huge overlap between the AI safety movement and EA. To fail to recognize that, and to carve time out of his day to compose naked, petty invective against EA over his disagreements, seems quite unpromising to me.
As a relevant point, he also writes things like this, where he tries to keep EAs from unnecessarily beating themselves up. (I disagree with him on the facts, but I think it was a kind thing to do.)
I get why you read it as “kind.” But I have an alternative thesis:
Functionally, the essay erects a firewall between Eliezer and the FTX scandal.
While superficially “kind,” the essay is fundamentally infantilizing, absolving the community while denying it agency. This infantilization is crucial to building the firewall.
If you’re interested, I can expand on this.
Edit: Clarifying changes, especially to emphasize that I interpret the essay as containing motivated reasoning and self-interested spin, not that Eliezer is lying.
I’m not interested in making that request, but thanks for the offer. (I’m not asking you not to expand on it, to be clear.)
To respond to your point: you may be aware that there’s a large class of Singerian EAs who are pathologically self-guilting and prone to taking-personal-responsibility-for-the-bad-things-in-the-world, and it was kind to some of them to point out what was believed to be a true argument for why that was not the case here. I don’t think the essay is primarily explained by self-serving motivation; as evidence, you can see from the comments that Eliezer was perfectly open to evidence he was mistaken (he encouraged Habryka to post their chat publicly, where Habryka gave counterevidence). So I think it’s unfair to read poor intent into this, as opposed to genuine empathy/sympathy for people who are renowned for beating themselves up about things in the world that they are barely responsible for and have relatively little agency over.
it was kind to some of them to point out what was believed to be a true argument for why that was not the case here

I don’t see evidence in the post comments that it was received that way, though it’s possible that those who read it as true, helpful, and kind didn’t respond, or did so elsewhere.
Eliezer was perfectly open to evidence he was mistaken

I don’t think he’s a schemer or engaging in some kind of systematic project to silence dissent.
What do you mean by “ally” (in this context)?
Institutional support, funding, positive and persistent community interest, dialogue, and professional participation. Examples:
Open Phil
FTX Future Fund (extremely bad allyship, but still was regarded as allyship until it went down in flames)
80,000 Hours, MATS
MIRI has been heavily supported by EA donors
Anthropic safety influences
FHI (now closed) and CSER gave AI safety intellectual credibility and were staffed and funded by EAs
Take the above as my beliefs and understanding based on years of interaction, but no systematic up-to-date investigation.
I think that statement is tricky (the AI safety movement is not a monolithic entity, and neither is EA). It seems clearer that most of EA is not much of an ally of Eliezer.
No collective entity is a monolith.
If it wasn’t obvious, I meant the term “ally” not in the sense of a formally codified relationship, but to point out the uniquely high level of affinity, overlap, and shared concerns between the AI safety movement and EA.
There is a reason I said “ally,” rather than literally identifying EA as part of the AI safety movement or vice versa.
Yep, I am not trying to insist on a particularly narrow definition of “ally”.
I think that snapping back at people is most likely caused by the belief that the person one snapped at did something clearly stupid or didn’t bother to do a basic search of the related literature.
Does it help?