Thanks, I think these are interesting points.
I agree that some power concentration is likely necessary, and that it could be a lot, though I’m pretty unsure there.
In terms of what to do about that:
Do you have ideas about what widening political control over a small number of AIs would look like?
Another alternative to crossing our fingers would be to distribute strategic power while also building AI resilience. I don't think this works if there are big capability gaps, but if capabilities are relatively evenly distributed, there might be ways to build defensive enclaves.
Thanks for the comment Michael.
A minor quibble: it's not clear to me that you need ASI to end up with dangerous levels of power concentration, so you might need to ban AGI, and to do that you might need to ban AI development pretty soon.
I’ve been meaning to read your post though, so will do that soon.