“a dialogue with myself concerning eliezer yudkowsky” (not author)
This is a linkpost[1], but I don’t think the author would be happy to get commentary from people here on their blog, so I’ve copied the post here. The link is at the bottom, if you really, really want to comment on the original; please don’t, though.
red: i suppose i should at least give him credit for acting on his beliefs, but my god i am so tired of being sucked into the yudkowsky cinematic universe. no more of this shit for me. i am ready to break out of this stupid fucking simulation.
blue: what did he even think was going to happen after the time piece? this is the kind of shit that makes people either laugh at him or start hoarding GPUs. it’s not as if he’s been putting any skill points into being persuasive to normies and it shows. wasn’t he the one who taught us about consequentialism?
(i thought initially that blue was going to disagree with red but no, blue is just mad in a different way)
https://twitter.com/mattparlmer/status/1641232557374160897?s=20
red: it’s just insane to me in retrospect how much this one man’s paranoid fantasies have completely derailed the trajectory of my life. i came across his writing when i was in college. i was a child. this man is in some infuriating way my father and i don’t even have words for how badly he fucked that job up. my entire 20s spent in the rationality community was just an endless succession of believing in and then being disappointed by men who acted like they knew what they were doing and eliezer fucking yudkowsky was the final boss of that whole fucking gauntlet.
[Note: link intentionally non-clickable, and author intentionally cropped [edit: whoops guess it wasn’t? uh sorry], to respect author’s separation from this community. If you want to see the whole tweet thread, which is in fact very interesting, here’s the link: https://twitter.com/QiaochuYuan/status/1542781304621518848?s=20]
blue: speaking of consequentialism, the man dedicated his entire life to trying to warn people about the dangers of AI and, by his own admission, the main thing his efforts accomplished was to get a ton of people interested in AI, help both openAI and deepmind come into existence, and overall make the AI situation dramatically worse by his own standards. what a fucking clown show. openAI is his torment nexus.
yellow: i just want to point out that none of this is actually a counterargument to -
red: yellow, shut the FUCK up -
yellow: like i get it, i get it, okay, we need to come to terms with how we feel about this whole situation, but after we do that we also need to maybe, like, actually decide what we believe? which might require some actual thought and actual argument?
red: if i never have another thought about AI again it’ll be too soon. i would rather think about literally anything else. i would rather think about dung beetles.
yellow: heh remember that one tweet about dung beetles -
https://twitter.com/SarahAMcManus/status/1119021587561369602?s=20
red, blue: NOT THE TIME.
yellow: it’s a good tweet though, you know i love a good tweet.
red: we all love a good tweet. now. as i was saying. the problem is eliezer fucking yudkowsky thinks he can save the world with fear and paranoia and despair. in his heart he’s already given up! the “death with dignity” post was a year ago! it’s so clear from looking at him and reading his writing that whatever spark he had 15 years ago when he was writing the sequences is gone now. i almost feel sorry for him.
blue: the thing that really gets my goat about the whole airstrikes-on-datacenters proposal is it requires such a bizarre mix of extremely high and extremely low trust to make any sense—on the one hand, that you trust people so little not to abuse access to GPUs that you can’t let a single one go rogue, and on the other hand, that you trust the political process so much to coordinate violence perfectly against rogue GPUs and nothing else. “shut down all the large GPU clusters,” “no exceptions for anyone, including governments and militaries”—none of the sentences here have a subject. who is supposed to be doing this, eliezer???
red: not that i should be surprised by this point but i think way too many people are being fooled by the fact that he still talks in the rationalist register, so people keep being drawn into engaging with his ideas intellectually at face value instead of paying attention to the underlying emotional tone, which is insane. there’s no reason to take the airstrikes-on-datacenters proposal at face value. all it does is communicate how much despair he feels, that this is the only scenario he can imagine that could possibly do anything to stop what he thinks is the end of the world.
blue: ugh i don’t even want to talk about this anymore, now i actually do feel sorry for him. if his inner circle had any capacity to stand up to him at all they’d be strong-arming him into a nice quiet retirement somewhere. his time in the spotlight is over. he’s making the same points in the same language now as he was 10 years ago. it’s clear he neither can nor wants to change or grow or adapt in any real way.
yellow: so what should everyone be doing instead? who should everyone be listening to if not eliezer?
red: i have no idea. that’s the point. eliezer’s fantasy for how this was gonna go was clearly explained in harry potter and the methods of rationality—a single uber-genius, either him or someone else he was gonna find, figuring out AI safety on their own, completely within the comfort of their gigantic brain, because he doesn’t trust other people. that’s not how any of this is gonna go. none of us are smart enough individually to figure out what to do. we do this collectively, in public, or not at all. all i can do is be a good node in the autistic peer-to-peer information network. beyond that it’s in god’s hands.
blue, yellow: amen.
I have to thank you. I was spiraling due to yudkowsky’s writings. I even posted a question about what to do because I was paralyzed by fear. This is a helpful post.
I will say—unfortunately, we are in a tight situation, and eliezer’s approach to communicating about it is a bit … much. It is true that humanity has to respond quickly, but I think that, by responding together, we can. Just don’t think you have to worry on your own, or that you need to be paralyzed; yudkowsky’s approach to communicating causes that. As I’m sure you know from your experience with fighting, sometimes there are conflicts on earth. This conflict is between [humans and mostly-friendly ai] and [unfriendly ai], and there is in fact reason to believe that security resistance to unfriendly ai is meaningfully weak. I unfortunately do not intend to fully reassure you, but we can survive this; let’s figure out how. I think the insights about how to quickly make computers and biology defensibly secure do exist, but they aren’t trivial to find and use.
Also very relevant:
https://justathought.wiki/videos/2021/09/13/Lets_talk_about_soft_language_and_sensationalism
https://www.youtube.com/watch?v=dUEQveTKH90
The author is visible in the next screenshot, unless you meant something else (also, even if he wasn’t, the name is part of the URL).
hello I’m an idiot
I vaguely remember him saying things like that. This wasn’t it, but there was some tweet like “Everyone was getting along and then Elon blew that all up”.
I kinda get the impression he’s completely wrong about that, though; the fact that OpenAI and Deepmind (with its current culture) are the leaders of the field is way better than what I was expecting 10 years ago and seems like a clear win. It’s not optimal, but it’s better than the path we were on (AGI coming out of finance, or out of blindly mercenary orgs like Google, Microsoft, or Facebook AI), and Eliezer is the one who shifted us onto this path.
And was he serious? Does he really not see that? Is that assessment of his beliefs really coming from a deep reading of a couple of tweets? How much of this is due to the effects that twitter has had on his writing and your perception of him?
I must admit, as an outsider, I am somewhat confused as to why Eliezer’s opinion is given so much weight relative to all the other serious experts looking into AI problems. I understand why this was the case a decade ago, when not many people were seriously considering the issues, but now there are AI heavyweights like Stuart Russell on the case, whose expertise and knowledge of AI is greater than Eliezer’s, proven by actual accomplishments in the field. This is not to say Eliezer doesn’t have achievements to his name, but I find his academic work lackluster when compared to his skills in awareness raising, movement building, and persuasive writing.
Isn’t Stuart Russell an AI doomer as well, separated from Eliezer only by nuances? Are you asking why Less Wrong favors Eliezer’s takes over his?
well it’s more that eliezer is being loud right now, so he’s strongly affecting what folks are talking about. stuart russell level shouting is the open letter; then eliezer shows up, goes “I can be louder than you!”, and says to ban datacenters internationally by treaty as soon as possible, using significant military threat in negotiation.
I’m only going off of his book and this article, but I think they differ in far more than nuances. Stuart is saying “I don’t want my field of research destroyed”, while Eliezer is suggesting a global treaty to airstrike all GPU clusters, including on nuclear-armed nations. He seems to think the control problem is solvable if enough effort is put into it.
Eliezer’s beliefs are very extreme, and almost every accomplished expert disagrees with him. I’m not saying you should stop listening to his takes, just that you should pay more attention to other people.
You know the expression “hope for the best, prepare for the worst”? A true global ban on advanced AI is “preparing for the worst”—the worst case being (1) sufficiently advanced AI has a high risk of killing us all, unless we know exactly how to make it safe, and (2) we are very close to the threshold of danger.
Regarding (2), we may not know how close we are to the threshold of danger, but we have already surpassed a certain threshold of understanding (see the quote in Stuart Russell’s article—“we have no idea” whether GPT-4 forms its own goals), and capabilities are advancing monthly—ChatGPT, then GPT-4, now GPT-4 with reflection. Because performance depends so much on prompt engineering, we are very far from knowing the maximum capabilities of the LLMs we already have. Sufficient reflection applied to prompt engineering may already put us on the threshold of danger. It’s certainly driving us into the unknown.
Regarding (1), the attitude of the experts seems to be, let’s hope it’s not that dangerous, and/or not that hard to figure out safety, before we arrive at the threshold of danger. That’s not “preparing for the worst”; that’s “hoping for the best”.
Eliezer believes that, with overwhelming probability, creating superintelligence will kill us unless we have figured out safety beforehand. I would say the actual risk is unknown, but it really could be huge. The combination of power and unreliability we already see in language models gives us a taste of what that’s like.
Therefore I agree with Eliezer that in a safety-first world, capable of preparing for the worst in a cooperative way, we would see something like a global ban on advanced AI; at least until the theoretical basis of AI safety was more or less ironclad. We live in a very different world, a world of commercial and geopolitical competition that is driving an AI capabilities race. For that reason, and also because I am closer to the technical side than the political side, I prefer to focus on achieving AI safety rather than banning advanced AI. But let’s not kid ourselves; the current path involves taking huge unknown risks, and it should not have required a semi-outsider like Eliezer to forcefully raise, not just the idea of a pause, but the idea of a ban.
Sorry, I should have specified: I am very aware of Eliezer’s beliefs. I think his policy prescriptions are reasonable if his beliefs are true; I just don’t think his beliefs are true. Established AI experts have heard his arguments with serious consideration and an open mind, and still disagree with them. This is evidence that they are probably flawed, and I don’t find it particularly hard to think of potential flaws in his arguments.
The type of global ban envisioned by yudkowsky really only makes sense if you agree with his premises. For example, setting the bar at “more powerful than GPT-5” is a low bar that is very hard to enforce, and only makes sense given certain assumptions about the compute requirement for AGI. The idea that bombing any datacentres in nuclear-armed nations is “worth it” only makes sense if you think that any particular cluster has an extremely high chance of killing everyone, which I don’t think is the case.
I think Eliezer’s current attitude is actually much closer to how an ordinary person thinks or would think about the problem. Most people don’t feel a driving need to create a potential rival to the human race in the first place! It’s only those seduced by the siren call of technology, or who are trying to engage with the harsh realities of political and economic power, who think we just have to keep gambling in our current way. Any politician who seriously tried to talk about this issue would soon be trapped between public pressure to shut it all down, and private pressure to let it keep happening.
It may be hard to enforce but what other kind of ban would be meaningful? Consider just GPT 3.5 and 4, embedded in larger systems that give them memory, reflection, and access to the real world, something which multiple groups are working on right now. It would require something unusual for that not to lead to “AGI” within a handful of years.
A big part of it is simply that he’s still very good at being loud and sounding intensely spooky. He also doesn’t do a very good job explaining his reasons: he has leveled up his skill at conveying why it seems spooky to him without ever explaining the mechanics of the threat, because he did a good job thinking abstractly and did not do a good job compiling that into a median-human-understandable explanation. Notice how oddly he talks—it’s related to why he realized there was a problem, I suspect.
I have seen him on video several times, including the Bankless podcast, and it has never seemed to me that he talks at all “oddly”. What seems “odd” to you?
Talking like a rationalist. I do it too, so do you.
I don’t know what you’re pointing to with that, but I don’t see any “rationalistic” manner that distinguishes him from, say, his interlocutors on Bankless, or from Lex Fridman. (I’ve not seen Eliezer’s conversation with him, but I’ve seen other interviews by Fridman.)
I mean, he’s really smart, and articulate, and has thought about these things for a long time, and can speak spontaneously and cogently on the subject, and field unrehearsed questions. Being in the top whatever percentile in these attributes is, by definition, uncommon, but not “odd”, which means more than just uncommon.
The people here on lesswrong give EY’s opinion a lot of weight because LW was founded by EY, and functions as a kind of fan club.
https://www.lesswrong.com/posts/Ndtb22KYBxpBsagpj/eliezer-yudkowsky-facts