It’s useful to consider extinction and disempowerment separately. It’s not an unusual position that the considered decision of an AGI civilization would be to avoid killing everyone. This is compatible with a much higher probability of expected disempowerment. (For example, my expectation for the next few years while the LLMs are scaling is 90% disempowerment and 30% extinction, conditional on AGI in that timeframe, with most of the extinction probability coming from misuse or from rogue AGIs that would later regret the decision or wouldn’t end up representative of the wider AGI civilization. Extinction gets more weight with AGIs that don’t rely on human datasets as straightforwardly.)
I think the argument for shutdown survives replacing extinction with disempowerment-or-extinction, which is essentially the meaning of existential risk. Disempowerment is already pretty bad.
The distinction can be useful for reducing the probability of extinction-given-disempowerment, by trying to make frontier AI systems pseudokind. Unfortunately this supplies another argument for competition between labs, given that the pursuit of frontier AI already tends toward disempowerment of humanity.