Fast takeoff is not takeover is not extinction. For example, gradual disempowerment without a fast takeoff leads to takeover, which may lead to either extinction or permanent disempowerment, depending on the values of the AIs.
I think it’s quite plausible that AGIs merely de facto take over frontier AI R&D, with enough economic prosperity and human figureheads to ensure humanity’s complacency. And later, once there’s superintelligence, humanity might notice that it’s left with a tiny, insignificant share of the resources of the reachable universe and no prospect at all of ever changing this, even on cosmic timescales.
Agree; I’m strongly in favor of using a term like “disempowerment-risk” over “extinction-risk” in communication to laypeople – I think the latter detracts from the more important question of preventing a loss of control and emphasizes the thing that happens after, which is far more speculative (and often invites the common “sci-fi scenario” criticism).
Of course, it doesn’t sound as flashy, but I think saying “we shouldn’t build a machine that takes control of our entire future” is sufficiently attention-grabbing.