“a dialogue with myself concerning eliezer yudkowsky” (not author)

This is a linkpost[1], but I don’t think the author would be happy to get commentary from people here on their blog, so I’ve copied the post here. The link is at the bottom, if you really, really want to comment on the original; please don’t, though.


red: i suppose i should at least give him credit for acting on his beliefs, but my god i am so tired of being sucked into the yudkowsky cinematic universe. no more of this shit for me. i am ready to break out of this stupid fucking simulation.

blue: what did he even think was going to happen after the time piece? this is the kind of shit that makes people either laugh at him or start hoarding GPUs. it’s not as if he’s been putting any skill points into being persuasive to normies and it shows. wasn’t he the one who taught us about consequentialism?

(i thought initially that blue was going to disagree with red but no, blue is just mad in a different way)

https://twitter.com/mattparlmer/status/1641232557374160897?s=20

red: it’s just insane to me in retrospect how much this one man’s paranoid fantasies have completely derailed the trajectory of my life. i came across his writing when i was in college. i was a child. this man is in some infuriating way my father and i don’t even have words for how badly he fucked that job up. my entire 20s spent in the rationality community was just an endless succession of believing in and then being disappointed by men who acted like they knew what they were doing and eliezer fucking yudkowsky was the final boss of that whole fucking gauntlet.

[Note: link intentionally non-clickable, and the author’s name intentionally cropped [edit: whoops, guess it wasn’t? uh, sorry], to respect the author’s separation from this community. If you want to see the whole tweet thread, which is in fact very interesting, here’s the link: https://twitter.com/QiaochuYuan/status/1542781304621518848?s=20]

blue: speaking of consequentialism, the man dedicated his entire life to trying to warn people about the dangers of AI and, by his own admission, the main thing his efforts accomplished was to get a ton of people interested in AI, help both openAI and deepmind come into existence, and overall make the AI situation dramatically worse by his own standards. what a fucking clown show. openAI is his torment nexus.

yellow: i just want to point out that none of this is actually a counterargument to -

red: yellow, shut the FUCK up -

yellow: like i get it, i get it, okay, we need to come to terms with how we feel about this whole situation, but after we do that we also need to maybe, like, actually decide what we believe? which might require some actual thought and actual argument?

red: if i never have another thought about AI again it’ll be too soon. i would rather think about literally anything else. i would rather think about dung beetles.

yellow: heh remember that one tweet about dung beetles -

https://twitter.com/SarahAMcManus/status/1119021587561369602?s=20

red, blue: NOT THE TIME.

yellow: it’s a good tweet though, you know i love a good tweet.

red: we all love a good tweet. now. as i was saying. the problem is eliezer fucking yudkowsky thinks he can save the world with fear and paranoia and despair. in his heart he’s already given up! the “death with dignity” post was a year ago! it’s so clear from looking at him and reading his writing that whatever spark he had 15 years ago when he was writing the sequences is gone now. i almost feel sorry for him.

blue: the thing that really gets my goat about the whole airstrikes-on-datacenters proposal is it requires such a bizarre mix of extremely high and extremely low trust to make any sense—on the one hand, that you trust people so little not to abuse access to GPUs that you can’t let a single one go rogue, and on the other hand, that you trust the political process so much to coordinate violence perfectly against rogue GPUs and nothing else. “shut down all the large GPU clusters,” “no exceptions for anyone, including governments and militaries”—none of the sentences here have a subject. who is supposed to be doing this, eliezer???

red: not that i should be surprised by this point but i think way too many people are being fooled by the fact that he still talks in the rationalist register, so people keep being drawn into engaging with his ideas intellectually at face value instead of paying attention to the underlying emotional tone, which is insane. there’s no reason to take the airstrikes-on-datacenters proposal at face value. all it does is communicate how much despair he feels, that this is the only scenario he can imagine that could possibly do anything to stop what he thinks is the end of the world.

blue: ugh i don’t even want to talk about this anymore, now i actually do feel sorry for him. if his inner circle had any capacity to stand up to him at all they’d be strong-arming him into a nice quiet retirement somewhere. his time in the spotlight is over. he’s making the same points in the same language now as he was 10 years ago. it’s clear he neither can nor wants to change or grow or adapt in any real way.

yellow: so what should everyone be doing instead? who should everyone be listening to if not eliezer?

red: i have no idea. that’s the point. eliezer’s fantasy for how this was gonna go was clearly explained in harry potter and the methods of rationality—a single uber-genius, either him or someone else he was gonna find, figuring out AI safety on their own, completely within the comfort of their gigantic brain, because he doesn’t trust other people. that’s not how any of this is gonna go. none of us are smart enough individually to figure out what to do. we do this collectively, in public, or not at all. all i can do is be a good node in the autistic peer-to-peer information network. beyond that it’s in god’s hands.

blue, yellow: amen.

  1. ^