Estimates of doom should clarify their stance on permanent disempowerment, where originally-humans are never allowed to advance to match the level of development (and influence) of originally-AI superintelligences (or never given the relevant resources to get there). Is it doom or is it non-doom? It could account for the bulk of the probability, so that’s an important ambiguity.
(Defining what we are talking about should come before deciding whether some claim about it is true, or what credence to give an event that involves it. If we decide that something is true, but still don’t know what it is that’s true, what exactly are we doing?)
I think this depends a lot on what the state and scope of disempowerment look like. E.g. if humans get the solar system and the AI gets the rest of the lightcone, that seems like a good outcome to me.
This illustrates my point. 1 star out of 4 billion galaxies solidly falls under Bostrom’s definition of existential risk:

Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
And yet many people consider it a non-doom outcome. So many worlds that fall to x-risk but have survivors are counted as non-doom worlds, which should make it clear that “doom” and x-risk shouldn’t be treated as interchangeable. The optimists claiming 10-20% P(doom) might at the same time be implicitly expecting 90-98% P(x-risk) according to such definitions.
This is a useful application of a probability map! If an important term has multiple competing definitions, create nodes for all of them, link the ones you consider important to a central p(doom) node (assuming you are interested in that concept), and let other people disagree with your assessment, but with a clearer sense of what they specifically disagree about.
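For concreteness, here is a minimal sketch of what such a probability map could look like in code, assuming a simple weighted-aggregation scheme. The definition names, credences, and weights below are hypothetical placeholders, not anyone’s actual estimates, and a real probability-map tool might link and aggregate nodes differently.

```python
# Minimal sketch of a probability map with competing definitions of "doom".
# All names and numbers are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class DefinitionNode:
    name: str           # which definition of "doom" this node captures
    probability: float  # credence that an outcome meeting this definition occurs
    weight: float       # how strongly this definition feeds the central p(doom) node


nodes = [
    DefinitionNode("annihilation of Earth-originating intelligent life", 0.10, 1.0),
    DefinitionNode("permanent disempowerment of originally-humans", 0.60, 0.7),
    DefinitionNode("drastic curtailment of potential (Bostrom x-risk)", 0.90, 0.3),
]


def central_p_doom(nodes: list[DefinitionNode]) -> float:
    """Aggregate the definition nodes into a central p(doom) estimate.

    Uses a weighted average; someone who disagrees can point at a specific
    node's credence or weight rather than at an opaque headline number.
    """
    total_weight = sum(n.weight for n in nodes)
    return sum(n.probability * n.weight for n in nodes) / total_weight


if __name__ == "__main__":
    for n in nodes:
        print(f"{n.name}: credence={n.probability:.2f}, weight={n.weight:.2f}")
    print(f"central p(doom) ~ {central_p_doom(nodes):.2f}")
```

The point of the structure is not the particular aggregation rule but the explicitness: each competing definition gets its own node, so a disagreement about “10-20% doom” can be traced to a specific definition’s credence or its weight.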