This post is even-handed and well-reasoned, and explains the issues involved clearly. The strategy-stealing assumption seems important: a lot of predictions inherently rely on it being either essentially true or effectively false, and I think the assumption will often be a crux in those disagreements, for reasons the post illustrates well.
The weird thing is that Paul ends the post saying he thinks the assumption is mostly true, whereas I thought the post was persuasive that the assumption is mostly false. The post illustrates that the unaligned force is likely to have many strategic and tactical advantages over aligned forces, which should allow the unaligned force to, at a minimum, ‘punch above its weight’ in various ways even under close-to-ideal conditions. And after the events of 2020, and my resulting updates to my model of humans, I’m highly skeptical that we’ll get close to ideal.
Either way, I’m happy to include this.
I would be surprised if this were a key crux for more than a few folks.
My intuition is that people’s cruxes are much more likely to be things like "AI development will be slow, so society will have time to adapt", "there are many more good guys than bad guys", or "power concentration is sufficiently terrifying that we have to bet on the offense-defense balance being favourable".