(Crossposted from EAF)
Nice write-up on the issue.
One thing I will say is that I’m perhaps unusually optimistic about power concentration compared to a lot of EAs/LWers. My main divergence is that I treat this counter-argument as decisive enough to make me think the risk of power concentration doesn’t go through, even in scenarios where humanity is about as careless as possible.

This is because evidence on human utility functions suggests that most people’s returns on goods for exclusive personal use diminish fast enough that, at stellar or galactic scales, their altruism matters much more than their selfish desires. It’s also because I’m a relatively big believer that quite a few risks, such as suffering risks, are very cheap to solve via moral trade even when most humans are apathetic about them.

More generally, I’ve become mostly convinced that a crucial positive consideration for any post-AGI/ASI future is that it’s really, really easy to prevent most of the worst outcomes in those futures under a broad array of values, even if moral objectivism/moral realism is false and there isn’t much convergence on values among the broad population.
Edit: I edited in a link.
Suppose that humans do have diminishing returns in their utility functions. Unfortunately, the existing combination of instincts and moral intuitions does not prompt the majority of humans to help the poor with much of anything, especially those far from a potential helper’s circle of friends[1]. And those who do help are unlikely to stay in power, and were unlikely to receive fortunes or occupy relevant positions in the first place.
Friends are also likely to be in the same class as the potential helpers.