I feel like someone should be arguing the other side, and no one else has stepped up, so I guess I’ll have a go. :-P This comment will be like 75% my honest opinions and 25% devil’s advocate. Note that I wasn’t around at the time, so sorry for any misunderstandings.
I think your OP does some conflation of (1) “Eliezer was trying to build FAI” with (2) “Eliezer was loudly raising the salience of ASI risk (and thus, incidentally, the salience of ASI in general and how big a deal ASI is), along with related community-building etc.”. But these are two somewhat separate decisions that Eliezer made.
For example, you summarize an article as claiming “Shane Legg was introduced to the idea of AGI through a 2000 talk by Eliezer, and then co-founded DM in 2010 (following an introduction by Eliezer to investor Peter Thiel…)” Those seem to be (2) not (1), right? Well, I guess the 2000 talk is neither (1) nor (2) (Eliezer didn’t yet buy AI risk in 2000), but more generally, MIRI could have directly tried to build FAI without Eliezer giving talks and introducing people, and conversely Eliezer could have given talks and introduced people without MIRI directly trying to build FAI.
So I’m skeptical that (1) (per se) contributed nontrivially to accelerating the race to ASI. For example, I’d be surprised if Demis founded DeepMind partly because he expected MIRI to successfully build ASI, and wanted to beat them to it. My guess is the opposite: Demis expected MIRI to fail to build powerful AI at all, and saw it as a safety outfit not doing anything relevant from a capabilities perspective. After all, DeepMind pursued a very different technical research direction.
On the one hand, I think there’s at least a strong prima facie case that (2) shortened timelines, which is bad. On the other hand, (2) helped build the field of alignment, which is good. So overall, how do we feel about (2)? I dunno. You yourself seemed to be endorsing (2) in 2004 (“…putting more resources into highlighting the dangers of unsafe AI…”). For my part, I have mixed feelings, but by default I tend to be in favor of (2) for kinda deontological reasons (if people’s lives are at risk, it’s by default good to tell them). But (2) is off-topic anyway; the thing you’re re-litigating is (1), right?
OK next, let’s talk about intelligence augmentation (IA), per your other comment proposal: “Given that there are known ways to significantly increase the number of geniuses (i.e., von Neumann level, or IQ 180 and greater), by cloning or embryo selection, an obvious alternative Singularity strategy is to invest directly or indirectly in these technologies, and to try to mitigate existential risks (for example by attempting to delay all significant AI efforts) until they mature and bear fruit (in the form of adult genius-level FAI researchers).”
There are geniuses today, and they mostly don’t work on FAI. Indeed, I think existing geniuses have done more to advance UFAI than FAI. I think the obvious zeroth-order model is that a world with more geniuses would just have all aspects of intellectual progress advance more rapidly, including both capabilities and alignment. So we’d wind up in the same place (i.e. probably doom), just sooner.
What would be some refinements on that zeroth-order model that make IA seem good?
One possible argument: “Maybe there’s a kind of ‘uncanny valley’ of ‘smart enough to advance UFAI but not smart enough to realize that it’s a bad idea’. And IA gets us a bunch of people who are all the way across the valley”. But uncanny-valley-theory doesn’t seem to fit the empirical data, from my perspective. When I look around, “raw intelligence” vs “awareness of AI risk and tendency to leverage that understanding into good decisions” seem somewhat orthogonal to me, as much as I want to flatter myself by thinking otherwise.
Another possible argument: “Maybe it’s not about the tippy-top of the intelligence distribution doing research, but rather about the middle of the distribution, e.g. executives and other decision-makers, making terrible decisions”. But realistically we’re not going to be creating tens of millions of geniuses before ASI, enough to really shift the overall population distribution. Note that there are already millions of people smarter than, say, Donald Trump, but they’re not in charge of the USA, and he is. Ditto Sam Altman, etc. There are structural reasons for that, and those reasons won’t go away when thousands of super-geniuses appear on the scene.
Another possible argument: “If awareness of x-risk, good decision-making, etc., relies partly on something besides pure intelligence, e.g. personality … well OK fine, we can do embryo-selection etc. on both intelligence and (that aspect of) personality.” I’m a bit more sympathetic to this, but the science to do that doesn’t exist yet (details). (I might work on it at some point.)
So that’s the IA possibility, which I don’t think changes the overall picture much. And now I’ll circle back to your five-point list. I already addressed the fifth. I claim that the other four are really bad things about our situation that we have basically no hope of avoiding. On my models, ASI doesn’t require much compute, just ideas, and people are already making progress developing those ideas. On the margin we can and should try to delay the inevitable, but ultimately someone is going to build it (and then probably everyone dies). If it gets built in a more democratic and bureaucratic way, like by some kind of CERN for AI, then there are some nice things to say about that from the perspective of ethical procedure, but I don’t expect a better actual outcome than MIRI-of-2010 building it. Probably much worse. The project will still be rolling its own metaethics (at best!), the project will still be ignoring illegible safety problems, the project will almost definitely still involve key personnel winding up in a position to grab world-altering power, and the project will probably still be subjecting the whole world to dire risk by doing something that most of the world doesn’t want them to do. (Or if they pause to wait for global consensus, then someone else will build it in the meantime.) We still have all those problems, because those problems are unavoidable, alas.