The picture of abundance that you describe, with material problems solved but spiritual problems ever present, feels pretty unlikely to me.
Here’s how the landscape looks to me now. Most scenarios of the future are bad, where humans get thrown to the side. Then there’s a small valley of good scenarios, the main one, I think, being the “housecats” scenario: superintelligences build a world where humans live well. Not just materially well, but well in general. Because an aligned superintelligence will understand our spiritual needs (including the need to get something only with effort, the need to be needed for something, and so on) just as well as our material ones.
And separating the huge plain of bad scenarios from the small valley of good ones, there’s a jagged wall of cliffs—scenarios where superintelligences got part of our values, but not all. Many of these cliffs are S-risks or similar, much worse than the ordinary bad outcome. Others are like the abundance scenario, where material needs get met but spiritual ones don’t. Or the inverse scenario, where spiritual needs get satisfied but material ones don’t. There are too many of these cliffs to enumerate, and we should try not to end up on the cliffs at all.
This seems relevant to the plausibility of permanent disempowerment (originally-humans being denied the potential to eventually match originally-AI superintelligences in level of development). Not being permanently disempowered is at least an occasionally occurring spiritual need, so singling out permanent disempowerment as a major possibility perhaps runs a similar risk: privileging this particular spiritual need as uniquely endangered (by a superintelligence that otherwise doesn’t kill everyone).
Mostly excluding permanent disempowerment from the possible outcomes (even if by stipulation) could also be a good exercise in hardening arguments against the motte-and-bailey of humanity being “OK” (surviving) under permanent disempowerment vs. humanity being doomed to permanent disempowerment (two framings with very different valence describing exactly the same outcomes). Wanting to believe humanity survives might inflate expectations of permanent disempowerment, while skepticism about AIs sharing the cosmic endowment with a powerless humanity might shrink the expected probability of non-disempowerment eutopia outcomes.
So forcing the probability of extinction-plus-disempowerment to be close to that of extinction alone puts these pressures in conflict with each other. If permanent disempowerment is not an option, asking for humanity’s survival translates into asking for the eutopia outcomes without permanent disempowerment to become more likely. And skepticism about AIs sharing the cosmic endowment translates into expecting human extinction, rather than merely into an inflated probability of permanent disempowerment that eats into the probability of eutopia.