I think this depends a lot on what the state and scope of disempowerment looks like. E.g. if humans get the solar system and the AI gets the rest of the lightcone that seems like a good outcome to me.

This illustrates my point: 1 star out of 4 billion galaxies falls solidly under Bostrom’s definition of existential risk:

Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
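To put a rough number on “drastically curtail” (assuming on the order of $10^{11}$ stars per galaxy, an illustrative figure rather than one anyone in the thread has given), humanity’s retained share of reachable resources in that scenario would be about

$$\frac{1\ \text{star}}{4\times 10^{9}\ \text{galaxies}\times 10^{11}\ \text{stars/galaxy}} = 2.5\times 10^{-21},$$

i.e. humanity’s potential cut down by roughly twenty orders of magnitude.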
And yet many people consider that scenario a non-doom outcome. Many worlds that fall to x-risk but still have survivors are thus counted as non-doom worlds, which should make it clear that “doom” and x-risk shouldn’t be treated as interchangeable. Optimists claiming a P(doom) of 10-20% might, by such definitions, be implicitly expecting a P(x-risk) of 90-98% at the same time.
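To make that gap explicit (with illustrative numbers, not ones anyone in the thread has endorsed): if doom-worlds are a strict subset of x-risk worlds, then

$$P(\text{x-risk}) = P(\text{doom}) + P(\text{survivors, but potential drastically curtailed}),$$

so, for example, $P(\text{doom}) = 0.10$ is perfectly consistent with $P(\text{x-risk}) = 0.10 + 0.85 = 0.95$.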