Yes, the future of humanity being a good place to live (within its resource constraints) follows from that being cheap for the superintelligence to ensure (given that it has decided to let humanity exist at all), while permanent disempowerment (at a level significantly below the full cosmic endowment) follows from the superintelligence not placing humanity's future on the level of its own interests. Maybe there's 2% for actually capturing a significant part of the cosmic endowment (the eutopia outcomes) and 20% for extinction, with most of the remaining probability going to these disempowered-but-livable futures. I'm not giving s-risks much credence, but maybe they still get 1% when broadly construed (any kind of warping of humanity's future that's meaningfully at odds with what humanity, and even individual humans, would've wanted on reflection, given the resource constraints it has to work within).
I should also clarify that by “making it harmless” I simply mean the future of humanity being unable to actually do any harm in the end, perhaps through lacking direct access to the physical level of the world. The point is to avoid negative externalities for the hosting superintelligence, so that the necessary sliver of compute stays within budget. This doesn’t imply any sinister cognitive changes that would make the future of humanity incapable of even considering the idea of doing harm, or of working in that direction.