This trend is actually kind of good for AI safety...? Because it means you only have to control and regulate N frontier labs, with N = 4 or so, rather than billions of individuals, some of whom are potentially crazy.
Create a global AI non-proliferation treaty, restrict the strongest models to the big labs, and require those labs to follow a highly cautious and bureaucratic process when rolling out a new model. It won’t be easy to implement, but it’s not inconceivable.
A rephrasing of your motto: don’t build a huge empire, because eventually that empire will grow corrupt. Better to have an archipelago of city-states, because when an individual city decays, it’s not a global catastrophe.
But Eliezer seems to think we need a global regulatory agency for AI. It’s a plausible enough idea, but what happens when that agency falls into corruption, like all the other crappy three-letter agencies run by the US and the UN?