This seems relevant to the plausibility of permanent disempowerment (originally-humans being denied the potential to eventually match originally-AI superintelligences in level of development). Not being permanently disempowered is at least an occasionally occurring spiritual need, so singling out permanent disempowerment as a major possibility perhaps runs a similar risk of privileging this particular spiritual need as being especially at risk (from a superintelligence that otherwise doesn’t kill everyone).
Mostly excluding permanent disempowerment from the possible outcomes (even if by stipulation) could also be a good exercise in hardening arguments against the motte-and-bailey of humanity being “OK” (surviving) with permanent disempowerment vs. humanity being doomed to permanent disempowerment (two framings with very different valence that describe exactly the same outcomes). Wanting to believe humanity survives might balloon expectations of permanent disempowerment, while skepticism about AIs sharing the cosmic endowment with a powerless humanity might shrink the expected probability of non-disempowerment eutopia outcomes.
So forcing the probability of extinction-plus-disempowerment to be close to that of extinction alone puts these pressures in conflict with each other. If permanent disempowerment is not an option, asking for humanity’s survival translates into asking for the eutopia outcomes with no permanent disempowerment to become more likely. And skepticism about AIs sharing the cosmic endowment translates into expecting human extinction, rather than merely a ballooning probability of permanent disempowerment that eats into the probability of eutopia.
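As a rough sketch of the bookkeeping (my own shorthand, not from the original discussion: let $E$, $D$, $U$ stand for extinction, permanent disempowerment, and non-disempowerment eutopia, treated here as exhaustive and mutually exclusive):

$$P(E) + P(D) + P(U) = 1, \qquad P(E \lor D) \approx P(E) \;\Rightarrow\; P(D) \approx 0 \;\Rightarrow\; P(U) \approx 1 - P(E).$$

With $P(D)$ pinned near zero, any probability mass that skepticism removes from $U$ can only land on $E$, and any probability mass that hope for survival adds can only land on $U$.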