(Hopefully it’s not rude to state my personal impression of Eliezer as a thinker. I think he’s enough of a public figure that it’s acceptable for me to comment on it. I’d like to note that I have benefited in many important ways from Eliezer’s writing and ideas, and I’ve generally enjoyed interacting with him in person, and I’m sad that as a result of some of our disagreements our interactions are tense.)
Yeah, I agree that there’s no one who Pareto dominates Eliezer at his top four most exceptional traits. (Which I guess I’d say are: taking important weird ideas seriously, writing compelling/moving/insightful fiction (for a certain audience), writing compelling/evocative/inspiring stuff about how humans should relate to rationality (for a certain audience), being broadly knowledgeable and having clever insights about many different fields.)
(I don’t think that he’s particularly good at thinking about AI; at the very least he is nowhere near as exceptional as he is at those other things.)
I’m not trying to disagree with you. I’m just going to ruminate unstructuredly a little on this:
I know a reasonable number of exceptional people. I am involved in a bunch of conversations about what fairly special people should do. In my experience, when you’re considering two people who might try to achieve a particular goal, it’s usually the case that each has some big advantages over the other in terms of personal capabilities. So, they naturally try to approach it fairly differently. We can think about this in the case where you are hiring CEOs for a project or speculating about what will happen when companies headed by different CEOs compete.
For example, consider the differences between Sam Altman and Dario Amodei (I don’t know either that well, nor do I understand the internal workings of OpenAI/Anthropic, so I’m sort of speculating here):
Dario, unlike Sam, is a good ML researcher. This means that Sam needs to depend more on technical judgment from other people.
Sam had way more connections in Silicon Valley tech, at least when Anthropic was founded.
Dario has lots of connections to the EA community and was able to hire a bunch of EAs.
Sam is much more suave in a certain way than Dario is; each style is an advantage with different audiences.
Both of them have done pretty well for themselves in similar roles.
As a CEO myself, I find it pretty interesting how non-interchangeable most people are. And it’s interesting how, in a lot of cases, it’s possible to compensate for one weakness with a strength that seems almost unrelated.
If Eliezer had never been around, my guess is that the situation around AI safety would be somewhat but not incredibly different (though probably overall substantially worse):
Nick Bostrom and Carl Shulman and friends were already talking about all this stuff.
Shulman and Holden Karnofsky would have met and talked about AI risk.
I’m pretty sure Paul Christiano would have run across all this and started thinking about it, though perhaps more slowly? He might have tried harder to write for a public audience, or to get other people to, if LessWrong didn’t already exist.
The early effective altruists would have run across these ideas and been persuaded by them, though somewhat more slowly?
I’m not sure whether more or less EA community building would have happened 2016-2020. It would have been less obvious that community building efforts could work in principle, but less of the low-hanging fruit would have been plucked.
EA idea-spreading work would have been more centered around the kinds of ideas that non-Eliezer people are drawn to.
My guess is that the quality of ideas in the AI safety space would probably be better at this point?
Maybe a relevant underlying belief of mine is that Eliezer is very good at coming up with terms for things and articulating why something is important, and he also had the important strength of realizing how important AI was before many other people did. But I don’t think his thinking about AI is actually very good on the merits. Most of the ideas he’s spread were originally substantially proposed by other people; his contribution was IMO mostly his reframings and popularizations. And I don’t think his most original ideas actually look that good. (See here for an AI summary.)
Without HPMOR and his Sequences, many people probably wouldn’t have become interested in rationality (or in the way it’s presented there) as quickly, or at all. But then, without his fascination with certain controversial ideas, AI safety and rationality groups in general might have been seen as less fringe and more reasonable. I’m thinking of things like: focusing on AI takeoff/risk scenarios that depend on overly sci-fi threat models (grey goo, viruses that make all humans drop dead instantly, endless recursive self-improvement, etc., which we don’t know to be possible) as opposed to more realistic and verifiable threat models like “normal” pandemics, cybersecurity, military robots, and ordinary economic/physical efficiency; leaning too heavily on moral absolutism, either believing AGI will have some universal “correct” ethics or that ensuring AGI has such ethics is the main or only path to safe AI; and various weird obsessions, like the idea of legalizing r*pe, that might have alienated many women and other readers.
various weird obsessions, like the idea of legalizing r*pe, that might have alienated many women and other readers
Sidenote: I object to calling this a weird obsession. It was a minor-to-medium plot point in one science fiction story he wrote, and (to my knowledge) he has never advocated for it or even discussed it beyond its relevance to the story. I don’t think that counts as an obsession.
The early effective altruists would have run across these ideas and been persuaded by them, though somewhat more slowly?
I think I doubt this particular point. That EA embraced AI risk (to the extent that it did) seems to me like a fairly contingent historical fact, due to LessWrong being one of the three original proto-communities of EA.
I think early EA could have grown into several very different scenes/movements/cultures/communities, in both form and content. That we would have broadly bought into AI risk as an important cause area doesn’t seem overdetermined to me.