(Crossposted from EAF)
Nice write-up on the issue.
One thing I will say is that I'm maybe unusually optimistic about power concentration compared to a lot of EAs/LWers. The main divergence is that I basically treat this counter-argument as decisive: it makes me think the risk of power concentration doesn't go through, even in scenarios where humanity is about as careless as possible.
This is due to evidence on human utility functions showing that most people have diminishing returns on exclusive goods for personal use, and those returns diminish fast enough that altruism matters much more than selfish desires at stellar/galaxy-wide scales. It's also because I'm a relatively big believer that quite a few risks, like suffering risks, are very cheap to solve via moral trade in cases where most humans are apathetic.
More generally, I've become mostly convinced that a crucial positive consideration for any post-AGI/ASI future is that it's really, really easy to prevent most of the worst things that can happen in those futures under a broad array of values, even if moral objectivism/moral realism is false and there isn't much convergence on values among the broad population.
Edit: I edited in a link.
An underrated answer is that humans are very, very dependent on other people to survive. We have easily the longest vulnerable childhood of any mammal, and even once we become adults, we are still really bad at surviving on our own compared to other animals. And since we are K-selected, every dead child matters a lot in evolution, so it's very, very difficult for sociopathy to be selected for.