The most salient disagreement I can see at the moment concerns the authors’ expectations about how strange a superintelligence’s values will be and how maximizer-like its behavior will be; or rather, I am much more uncertain about these than they are.
The metaphor I use is Russian roulette, where you have a revolver with 6 chambers, 1 loaded with a bullet. The gun is pointed at the head of humanity. We spin, and we pull the trigger.
Various AI lab leaders have stated that we are playing with 1 (or 1.5) chambers loaded. Dario Amodei of Anthropic is, I believe, the most pessimistic among them, putting the chance that AI kills us at roughly 25%.
Eliezer and Nate suspect that we are playing with approximately 6 out of 6 chambers loaded.
Personally, I have a long and complicated argument that we may only be playing with 4 or 5 chambers loaded!
You might feel that we’re only playing with 2 or 3 chambers loaded.
The wise solution here is not to worry about precisely how many chambers are loaded. Even 1 in 6 odds are terrible! The wise solution is to stop playing Russian roulette entirely.