These seemed like obvious mistakes even at the time (I wrote posts/comments arguing against them)
Ok. If that’s true then yeah, you might be a very good strategic thinker about AGI X-risk. Yudkowsky still probably wins, given the evidence I currently have. He’s been going really hard at it for >20 years. You can criticize the writing style of LW, and say how in general he could have been deferred-to more gracefully, and I’m very open to that and somewhat interested in that.
But it seems strange to be counting down from “Yudkowsky-LW-sphere, but even better” rather than up from “no Yudkowsky-LW-sphere”. (Which isn’t to say “well his stuff is really popular so he’s a good strategic thinker”, but rather “actually the Sequences and CFAI and https://intelligence.org/files/AIPosNegFactor.pdf and https://intelligence.org/files/ComplexValues.pdf and https://intelligence.org/files/CognitiveBiases.pdf and https://files.givewell.org/files/labs/AI/IEM.pdf were a huge amount of strategic background; as a consequence of being good strategic background, they shifted many people to working on this”.)
Maybe I’m misunderstanding what you’re saying though / not addressing it. If someone had been building out the conceptual foundations of AGI X-derisking via social victory for >20 years, they’d probably have a strong claim to being the best strategic thinker on AGI X-risk in my book.
so I feel like the over-deference to Eliezer is a completely different phenomenon from “But you can’t become a simultaneous expert on most of the questions that you care about.” or has very different causes.
I’m not saying it is! You may have misread. (Or maybe I misspoke—if so, sorry, I’m not rereading my post but I can if you think I did say this.) I’m saying that SOME deference is probably unavoidable, BUT there’s a lot of ACTUAL deference (such as the examples I cited involving Yudkowsky!) that is BAD, so we should try to NOT DO THE BAD ONES but in a way that doesn’t NECESSARILY involve “just don’t defer at all”.
In other words, if you were going to spend your career on AI x-safety, of course you could have become an expert on these questions first.
No? They’re all really difficult questions. Even being an expert in one of these would be at least a career. I mean, maybe YOU can, but I can’t, and I definitely can’t do so when I’m just a kid starting to think about how to help with X-derisking.
I mean I’m obviously not arguing “don’t seriously investigate the crucial questions in your field for yourself”, or even “don’t completely unwind all your deference about strategy, all the way to the top, using your full power of critique, and start figuring things out actually from scratch”. I’ve explicitly told dozens of relative newcomers (2016--2019, roughly) to AGI X-derisking that they should stop trying so hard to defer, that there are several key dangers of deference, that they should try to become experts in key questions even if that would take a lot of effort, that the only way to be a really serious X-derisker is to start your work on planting questions about key elements, etc. My point is that
1. {people, groups, funders, organizations, fields} do in fact end up deferring, and
2. probably quite a lot of this is unavoidable, or at least unavoidable for now / given what we know about how to do group rationality,
3. but also deference has a ton of bad effects, so
4. we should figure out how to have less of those bad effects—and not just via “defer less”.
“…a huge amount of strategic background; as a consequence of being good strategic background, they shifted many people to working on this”
Maybe we should distinguish between being good at thinking about / explaining strategic background, versus being actually good at strategy per se, e.g. picking high-level directions or judging overall approaches? I think he’s good at the former, but people mistakenly deferred to him too much on the latter.
It would make sense that one could be good at one of these and less good at the other, as they require somewhat different skills. In particular I think the former does not require one to be able to think of all of the crucial considerations, or have overall good judgment after taking them all into consideration.
No? They’re all really difficult questions. Even being an expert in one of these would be at least a career. I mean, maybe YOU can, but I can’t, and I definitely can’t do so when I’m just a kid starting to think about how to help with X-derisking.
So Eliezer could become an expert in all of them starting from scratch, but you couldn’t, even though you could build upon his writings and other people’s? What was/is your theory of why he is so much above you in this regard? (“Being a kid” seems a red herring since Eliezer was pretty young when he did much of his strategic thinking.)
I think he’s good at the former, but people mistakenly deferred to him too much on the latter.
I agree and I said as much, but this also seems like a non sequitur if you’re just trying to say he’s not the best strategic thinker. Someone can be the best and also be “overrated” (or rather, overly deferred-to). I’m saying he is both. The “thinking about / explaining strategic background” is strong evidence of actually being good at strategy. Separately, Yudkowsky is the biggest creator of our chances of social victory, via the LW/X-derisking sphere! (I’m not super confident of that, but pretty confident? Any other candidates?) So it’s a bit hard to argue that he didn’t pick that strategic route as well as the technical route! You can’t grade Yudkowsky on his own special curve just for all his various attempts at X-derisking, and then separately grade everyone else.
It would make sense that one could be good at one of these and less good at the other, as they require somewhat different skills. In particular I think the former does not require one to be able to think of all of the crucial considerations, or have overall good judgment after taking them all into consideration.
Ok. I mean, true. I guess someone could suggest alternative candidates, though I’m noticing IDK why to care much about this question.
(I continue to have a sense that you’re misunderstanding what I’m saying, as described earlier, and also not sure what’s interesting about this topic. My bid would be, if there’s something here that seems interesting or important to you, that you would say a bit about what that is and why, as a way of recentering. It seems like you’re trying to drill down into particulars, but you keep being like “So why do you think X?” and I’m like “I don’t think X.”.)
By saying that he was the best strategic thinker, it seems like you’re trying to justify deferring to him on strategy (why not do that if he is actually the best), while also trying to figure out how to defer “gracefully”, whereas I’m questioning whether it made sense to defer to him at all, when you could have taken into account his (and other people’s) writings about strategic background, and then looked for other important considerations and formed your own judgments.
Another thing that interests me is that several of his high-level strategic judgments seemed wrong or questionable to me at the time (as listed in my OP, and I can look up my old posts/comments if that would help), and if it didn’t seem that way to others, I want to understand why. Was Eliezer actually right, given what we knew at the time? Did it require a rare strategic mind to notice his mistakes? Or was it a halo effect, or the effect of Eliezer writing too confidently, or something else, that caused others to have a cognitive blind spot about this?
By saying that he was the best strategic thinker, it seems like you’re trying to justify deferring to him on strategy (why not do that if he is actually the best)
No. You’re totally hallucinating this and also not updating when I’m repeatedly telling you no. It’s also the opposite of the point hammered in by the OP. My entire post is complaining about problems with deferring, and it links a prior post I wrote laying out these problems in detail, and I linked that essay to you again, and I linked several other writings explaining more how I’m against deferring and tell people not to defer repeatedly and in different ways. I bring up Eliezer to say “Look, we deferred to the best strategic thinker, and even though he’s the best strategic thinker, deferring was STILL really bad.”. Since I’ve described how deferring is really bad in several other places, here in THIS post I’m asking, given that we’re going to defer despite its costs, and given that to some extent at the end of the day we do have to defer on many things, what can we do to alleviate some of those problems?
And then you’re like “Ha. Why not just not defer?”.
A bit blackpilling re/ LW voters. So cowardly, and so wrong.
Since I’ve described how deferring is really bad in several other places, here in THIS post I’m asking, given that we’re going to defer despite its costs, and given that to some extent at the end of the day we do have to defer on many things, what can we do to alleviate some of those problems?
Ok, it looks like part of my motivation for going down this line of thought was based on a misunderstanding. But to be fair, in this post after you asked “What should we have done instead?” with regard to deferring to Eliezer, you didn’t clearly say “we should have not deferred or deferred less”, but instead wrote “We don’t have to stop deferring, to avoid this correlated failure. We just have to say that we’re deferring.” Given that this is a case where many people could have and should have not deferred, this just seems like a bad example to illustrate “given that to some extent at the end of the day we do have to defer on many things, what can we do to alleviate some of those problems?”, leading to the kind of confusion I had.
Also, another part of my motivation is still valid and I think it would be interesting to try to answer why didn’t you (and others) just not defer? Not in a rhetorical sense, but what actually caused this? Was it age as you hinted earlier? Was it just human nature to want to defer to someone? Was it that you were being paid by an organization that Eliezer founded and had very strong influence over? Etc.? And also why didn’t you (and others) notice Eliezer’s strategic mistakes, if that has a different or additional answer?
Also, another part of my motivation is still valid and I think it would be interesting to try to answer why didn’t you (and others) just not defer? Not in a rhetorical sense, but what actually caused this?
Ok, sure, that’s a good question, and also off-topic.
Was it age as you hinted earlier?
Yeah obviously. It’s literally impossible to not defer; all you get to pick is which things you invest in undeferring, in what order. I’m exceptionally non-deferential, but yeah, obviously you have to defer about lots of things.
Was it just human nature to want to defer to someone?
Yes it is also human nature to want to defer. E.g. that’s how you stay synched with your tribe on what stuff matters, how to act, etc.
Was it that you were being paid by an organization that Eliezer founded and had very strong influence over? Etc.?
No, I took being paid as more obligation to not defer.
Anyway, I’m banning you from my posts due to grossly negligent reading comprehension.
The grandparent explains why Dai was confused about your authorial intent, and his comment at the top of the thread is sitting at 31 karma in 15 votes, suggesting that other readers found Dai’s engagement valuable. If that’s grossly negligent reading comprehension, then would you prefer to just not have readers? That is, it seems strange to be counting down from “smart commenters interpret my words in the way I want them to be interpreted” rather than up from “no one reads or comments on my work.”
suggesting that other readers found Dai’s engagement valuable
This may not be a valid inference, or your update may be too strong, given that my comment got a strong upvote early or immediately, which caused it to land in the Popular Comments section of the front page, where others may have further upvoted it in a decontextualized way.
It looks like I’m not actually banned yet, but will disengage for now to respect Tsvi’s wishes/feelings. Thought I should correct the record on the above first, as I’m probably the only person who could (due to seeing the strong upvote and the resulting position in Popular Comments).
I have banned you from my posts, but my guess is that you’re still allowed to post on existing comment threads with you involved, or something like that. I’m happy for you to comment on anything that the LW interface allows you to comment on. [ETA: actually I hadn’t hit “submit” on the ban; I’ve done that now, so Wei Dai might no longer be able to reply on this post at all.]
Possibly I’ll unban you some time in the future (not that anyone cares too much, I presume). But like, this comment thread is kinda wild from my perspective. My current understanding is that you “went down some line of questioning” based on a misunderstanding, but did not state what your line of questioning was, and also ignored anything in my responses that wasn’t furthering your “line of questioning”, including stuff that was correcting your misunderstanding. Which is pretty anti-helpful.
Did you read the whole comment thread?
Are you wanting to say “I, Wei Dai, am a better strategic thinker on AGI X-derisking than Yudkowsky.”? That’s a perfectly fine thing to say IMO, but of course you should understand that most people (me included) wouldn’t by default have the context to believe that.
It’s not obvious to me that we’re better off than this world, sadly. It seems like one of the main effects was to draw lots of young blood into the field of AI.
That’s plausible, IDK. But are you saying that PROSPECTIVELY the PREDICTABLE-ish effects were bad? Who said “Sure you could tie together a whole bunch of existing epistemological threads, and do a bunch of new thinking, and explain AI danger very clearly and thoroughly, and yeah, you could attract a huge amount of brainpower to try to think clearly about how to derisk that, but then they’ll just all start trying to make AGI. And here’s the reasons I can actually know this.”? There might have been people starting to say this by 2015 or 2018, IDK. But in 2010? 2006?
I think it’s not an impossible call. The fiasco with Roko’s Basilisk (2010) seems like a warning that could have been heeded. It turns out that “freaking out” about something being dangerous and scary makes it salient and exciting, which in turn causes people to fixate on it in ways that are obviously counterproductive. That it becomes a mark of pride to do the dangerous thing and come through unscathed (as with the demon core). Even though you warned them about this from the beginning, and in very clear terms.
And even if there was no one able to see this (it’s not like I saw it), it remains a strategic error — reality doesn’t grade on a curve.
Yes, it would be a strategic error in a sense, but it wouldn’t be a strong argument against “Yudkowsky is the best strategic thinker on AGI X-derisking”, which I was given to understand was the topic of this thread. For that specific question, which seemed to be the topic of Wei Dai’s comment, it is graded on a curve. (I don’t actually feel that interested in that question though.)
The question doesn’t make sense. It’s not possible to judge conclusively whether something is good or bad ahead of time… only after the fact.
Because real-world actions and outcomes are what count, not what is claimed verbally or in writing.
Being a good strategist is about things like:
A) Understanding and probing the opposition/problem well
B) Coordinating your resources
C) Understanding rules and principles governing the nature of the game (operational constraints)
D) Creative problem solving + tactics
E) Knowing strategic principles (e.g., seizing initiative, pre-empting the opposition, leveraging commitment vulnerabilities, etc.)
F) Managing asymmetric information (my specialty)
G) Avoiding risky overcommitment
Etc. I am not sure Eliezer has showcased such skills in his work. He is a brilliant independent researcher and thinker, but not a top-tier strategist or leader, as far as I can tell.
Is there someone you’d point to as being a better “strategic thinker on the topic of existential risk from AGI”, as is the topic of discussion in this thread?
Good question. ARE there any A-tier strategists at all on x-risk? I’d nominate Stuart Russell. Hm. Even Yoshua Bengio is arguably also having a larger impact than Eliezer in some critical areas (policy).
For pure strategic competence, Amandeep Singh Gill.
Jaan Tallinn. Maybe even Xue Lan.
Russell, Bengio, and Tallinn are good but not in the same league as Yudkowsky in terms of strategic thinking about AGI X-derisking. A quick search of Gill doesn’t turn up anything about existential risk but I could very easily have missed it.
Okay, I think I see the confusion. Your phrasing makes it seem (to me at least) like Eliezer has had the biggest strategic impact on mitigating x-risk, and is arguably also the most competent there. I would really not be sure of that. But if we talk about strategically dissecting x-risk, without necessarily mitigating it, directly or indirectly, then maybe Eliezer would win. Still would maybe lean towards Stuart.
Gill IS having an impact that de facto mitigates x-risk, whether he uses the term or not. But he is not making people talk about it (without necessarily doing anything about it) as much as Eliezer. In that sense one could argue he isn’t really an x-risk champ.