So, thinking about the kinds of things I would want a superintelligence to pursue in an optimistic scenario where we can just write its goals into a human-legible soul doc and that scales all the way: “human flourishing” and “sentient flourishing” both seem incorrect, since there would be other moral patients (most of whom would almost certainly be AI), and also I don’t want the atoms of me and my kids rearranged into different beings that could flourish better.
“Pareto improvement” reconciles these but isn’t right either; plenty of people would be worse off in utopia (by their own lights), because they currently have a degree of unaccountable power over others that is worth more to them than any creature comforts would be.