I think it’s important to distinguish irritation from insult. The internet is a stressful place to communicate. Being snappish and irritable is normal. And many people insult specific groups they disagree with, at least occasionally.
What sets Eliezer apart from Gwern, Scott Alexander and Zvi is that he insults his allies.
That is not a recipe for political success. I think it makes sense to question whether he’s well suited to the role of public communicator about AI safety issues, given this unusual personality trait of his.
Your conception of “allies” seems… flawed given the history here. I don’t super want to litigate this, but this feels like a particularly weak analysis.
You don’t think EA is an ally of the AI safety movement?
Eliezer definitely doesn’t think of it as an ally (or at least, not a good ally whom he appreciates and wants to be on good terms with).
Yeah, that’s the problem. EA is the most obvious community clearly invested in the kind of AI safety issues Eliezer focuses on; there’s huge overlap between the AI safety and EA movements. To fail to recognize that, and to carve time out of his day to compose naked, petty invective against EA over his disagreements, seems quite unpromising to me.
As a relevant point, he also writes things like this, where he tries to keep EAs from unnecessarily beating themselves up. (I disagree with him on the facts, but I think it was a kind thing to do.)
I get why you read it as “kind.” But I have an alternative thesis:
Functionally, the essay erects a firewall between Eliezer and the FTX scandal.
While superficially “kind,” the essay is fundamentally infantilizing, absolving the community while denying them agency. This infantilization is crucial to building the firewall.
If you’re interested, I can expand on this.
Edit: Clarifying changes, especially to emphasize that I interpret the essay as containing motivated reasoning and self-interested spin, not that Eliezer is lying.
I’m not interested in requesting that expansion, but thanks for the offer. (I’m not asking you not to, to be clear.)
To respond to your point: you may be aware that there’s a large class of Singerian EAs who are pathologically self-guilting and prone to taking personal responsibility for the bad things in the world, and it was kind to some of them to point out what was believed to be a true argument for why that was not the case here. I don’t think the post is primarily explained by self-serving motivation. As evidence, you can see from the comments that Eliezer was perfectly open to evidence that he was mistaken (he encouraged Habryka to post their chat publicly, where Habryka gave counterevidence). So I think it’s unfair to read poor intent into this, as opposed to genuine empathy/sympathy for people who are renowned for beating themselves up about things in the world that they are barely responsible for and have relatively little agency over.
it was kind to some of them to point out what was believed to be a true argument for why that was not the case here

I don’t see evidence in the post comments that it was received that way, though it’s possible those who read it as true, helpful, and kind didn’t respond, or responded elsewhere.
Eliezer was perfectly open to evidence he was mistaken

I don’t think he’s a schemer or engaging in some kind of systematic project to silence dissent.
What do you mean by “ally” (in this context)?
Institutional support, funding, persistent and positive community interest, dialogue, and professional participation. Examples:
Open Phil
FTX Future Fund (extremely bad allyship, but it was still regarded as allyship until it went down in flames)
80,000 hours, MATS
MIRI has been heavily supported by EA donors
Anthropic safety influences
FHI (now closed) and CSER gave AI safety intellectual credibility and were staffed and funded by EAs
Take the above as my beliefs and understanding based on years of interaction, but no systematic up-to-date investigation.
I think that statement is tricky (the AI Safety Movement is not a monolithic entity, and neither is EA). It seems more clear that most of EA is not much of an ally of Eliezer.
No collective entity is a monolith.
If it wasn’t obvious, I meant the term “ally” not in the sense of a formally codified relationship, but to point to the uniquely high level of affinity, overlap, and shared concerns between the AI safety movement and EA.
There is a reason I said “ally,” rather than literally identifying EA as part of the AI safety movement or vice versa.
Yep, I am not trying to insist on a particularly narrow definition of “ally”.