Being left the solar system (by AI)
An outcome where humans are left the solar system while AI(s) colonize the universe need not be a (Bostrom-defined) negative existential outcome. The Bostrom definition specifies “Earth-originating intelligent life”, and human-derived AIs count.
It could still be a negative existential outcome if the AIs have values alien enough that we judge (using our values) that they are not fulfilling their potential. However, in this hypothetical the AIs have concluded that the optimal configuration of the solar system’s atoms is to leave it as a place for humans. This implies that the AIs do not have fully alien values, as discussed in “The sun is big but (unaligned) superintelligences will not spare Earth a little sunlight”. That further implies that the AIs will do broadly good things with the rest of the universe too, albeit not the same things I would do.
My guess is that if you poke at people who consider “humans are left the solar system” to be a non-doom outcome, you would find that most of them would consider it a doom outcome if the rest of the universe were converted into paperclips.
A stance on Crunches, Shrieks, and Whimpers
I would broadly include all of Bangs, Crunches, Shrieks, and Whimpers as “doom”, based on my reading of Bostrom’s definitions. Again, I’m curious to read examples from people who explicitly exclude one or more of these categories; I expect most people would include all of them.
I don’t think that fully settles the question, though. The definitions of Shrieks and Whimpers both include subjective elements (note “an extremely narrow band” and “a minuscule degree”):
Shrieks – Some form of posthumanity is attained but it is an extremely narrow band of what is possible and desirable.
Whimpers – A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved.
Sometimes people define “doom” or “x-risk” to include cases where humanity achieves only, say, 1% of its potential. I would not include these cases, for two reasons:
First, I think it’s very likely that there is no outcome that all living humans would agree achieves even 1% of our potential. Your heaven is not my heaven. Framing such outcomes as “doom” could make it harder for humanity to coordinate.
Second, in life it’s common for an outcome in the top 1% of possible outcomes to still be less than 1% as good as the best possible outcome. The best possible outcome of going to a social event is meeting your optimal spouse/co-founder/guru and living happily ever after. But if I knew in advance that a given social event would not produce an outcome even 1% as good as that, I would not consider the event doomed.
So given those factors it’s natural for me to interpret “a minuscule degree” and “an extremely narrow band” much more narrowly than someone with other intuitions.
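To make the “top 1% of outcomes vs. 1% of the best outcome” point concrete, here is a minimal sketch, not from the original argument: it samples outcome values from a heavy-tailed distribution and compares the 99th-percentile outcome to the best sampled one. The lognormal distribution and its parameters are my assumptions, chosen purely for illustration.

```python
import numpy as np

# Minimal illustration (assumed distribution, not a claim about real outcome spaces):
# with heavy-tailed "outcome values", the 99th-percentile outcome can be far less
# than 1% as good as the best sampled outcome.
rng = np.random.default_rng(0)
outcomes = rng.lognormal(mean=0.0, sigma=3.0, size=1_000_000)

p99 = np.percentile(outcomes, 99)  # an outcome at the edge of the top 1%
best = outcomes.max()              # the best outcome in the sample

print(f"99th-percentile outcome: {p99:,.1f}")
print(f"best sampled outcome:    {best:,.1f}")
print(f"ratio (p99 / best):      {p99 / best:.4%}")
# Typical result: the ratio lands well below 1%, so a top-1% outcome is still
# less than 1% as good as the best one.
```

Nothing hinges on the specific distribution; the point is just that with a heavy-tailed outcome distribution, “top 1% of outcomes” and “1% as good as the best outcome” come apart.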