On IABIED
First things first, I wholeheartedly endorse the main actionable conclusion: Ban unrestrained progress on AI that can kill us all.
I broadly think Eliezer and Nate did a good job communicating what’s so difficult about the task of building a thing that is more intelligent than all of humanity combined and shaped appropriately so as to help us, rather than having a volition of its own that runs contrary to ours.[1]
The main (/most salient) disagreement I can see at the moment concerns the authors’ expectations about the value-strangeness and maximizeriness of superintelligence; or rather, I am much more uncertain about this than they are. However, this disagreement does not bear on the desirability of the post-ASI future conditional on business-close-to-as-usual, and therefore does not bear on whether the ban is good.
(Also, I’m not sure about their choice of some stories/parables, but that’s a minor issue as well.)
I liked the comparison with the Allies winning against the Axis in WWII, which, at least in resource/monetary terms, must have cost much more than it would cost to implement the ban. The things we’re missing at the moment are awareness of the issue, pulling ourselves together, and collective steam.
[1] Whatever that means; cf. the problems of CEV and idealized values.
The metaphor I use is Russian roulette, where you have a revolver with 6 chambers, 1 loaded with a bullet. The gun is pointed at the head of humanity. We spin, and we pull the trigger.
Various AI lab leaders have stated that we are playing with 1 (or 1.5) chambers loaded. Dario Amodei of Anthropic is, I believe, the most pessimistic among them, putting the chance that AI kills us at around 25%.
Eliezer and Nate suspect that we are playing with approximately 6 out of 6 chambers loaded.
Personally, I have a long and complicated argument that we may only be playing with 4 or 5 chambers loaded!
You might feel that we’re only playing with 2 or 3 chambers loaded.
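The chamber counts above translate directly into rough probabilities. A toy sketch of that mapping (the labels are my shorthand for the positions in the text, and the single-number chamber counts for ranges like “4 or 5” are my own midpoints):

```python
def p_doom(chambers_loaded, total_chambers=6):
    """Implied probability of catastrophe in the revolver metaphor."""
    return chambers_loaded / total_chambers

# Rough positions described in the text (chamber counts are illustrative).
positions = [
    ("optimistic lab leaders", 1.5),  # 1.5/6 matches the ~25% figure
    ("Eliezer & Nate", 6),            # approximately all chambers loaded
    ("this post", 4.5),               # midpoint of "4 or 5 chambers"
    ("a milder skeptic", 2),          # lower end of "2 or 3 chambers"
]

for label, chambers in positions:
    print(f"{label}: {chambers}/6 chambers -> {p_doom(chambers):.0%}")
```

Note that even the most optimistic row still implies a double-digit chance of catastrophe, which is the point of the paragraph that follows.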
The wise solution here is not to worry about precisely how many chambers are loaded. Even 1 in 6 odds are terrible! The wise solution is to stop playing Russian roulette entirely.