In one framing, dangerous capabilities well short of superintelligence arrive first, and overwhelmingly catastrophic capabilities follow only some 20 years later. Holding that superintelligence is impossible, or that its sudden emergence (in a matter of years) is impossible, makes whatever happens 20 years after a more modest milestone less relevant: whatever happens a few years down the line (such as the state of alignment and control) is shaped by the work done before then, and there's time to figure things out.
Gradual disempowerment is a relevant argument within that framing. Disputing the framing requires arguing that sudden superintelligence is possible, or that eventual superintelligence is a phase change that preceding work won't prepare us for (rather than an arbitrary point in a gradual process, not distinct from all the other points). Disputing the framing is harder, but accepting it makes people much more tolerant of continuing unbounded development of increasingly capable AI, at the pace the technology itself is asking for. So these two models of AI danger are not well aligned on policy.