SpaceX doesn’t run a country because rockets + rocket-building engineers + money cannot perform all the functions of labour, capital, and government, and there’s no smooth pathway to SpaceX expanding that far. Increasing company scale is costly and often decreases efficiency; since they don’t have a monopoly on force, they have to stay cost-efficient and can’t expand into all the functions of government.
An AGI has the important properties of labour, capital, and government: there’s no “Lump of Labour”, so it doesn’t devalue as more of it exists; it can be produced at scale by more labour; and it can organize itself without external coordination or limitations. I expect any AGI with these properties to very rapidly outscale all humans, regardless of starting conditions, since the AGI won’t suffer from the same inefficiencies of scale or shortages of staff.
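The “regardless of starting conditions” claim is essentially a compounding-growth argument: if the AGI side reinvests its output into more of itself at constant efficiency while the human economy grows at a roughly fixed rate, the crossover time grows only logarithmically in the initial size gap. Here is a minimal toy sketch of that point; it is my own illustration rather than anything from this thread, and every parameter is an assumption.

```python
# Toy model (illustrative assumptions only): a human economy growing at a
# fixed rate vs. an AGI population that reinvests output into more of itself
# with no diseconomies of scale or staffing limits.

def human_economy(output: float, years: int, growth_rate: float = 0.03) -> float:
    """World economy growing at an assumed ~3%/year."""
    for _ in range(years):
        output *= 1 + growth_rate
    return output


def agi_economy(output: float, years: int, reinvestment: float = 0.5,
                efficiency: float = 1.0) -> float:
    """AGI output compounds: each reinvested unit buys `efficiency` units of
    new productive capacity, with no coordination overhead (assumed)."""
    for _ in range(years):
        output += output * reinvestment * efficiency
    return output


if __name__ == "__main__":
    # Start the AGI side a million times smaller than the world economy.
    human = human_economy(output=100e12, years=40)  # ~$100T starting point
    agi = agi_economy(output=100e6, years=40)       # ~$100M starting point
    print(f"human economy after 40 years: {human:.2e}")  # ~3.3e14
    print(f"AGI economy after 40 years:   {agi:.2e}")    # ~1.1e15
    # Under these assumed parameters the AGI side grows ~1.5x/year and
    # overtakes the million-fold-larger human economy in under four decades;
    # a higher assumed efficiency shortens that dramatically.
```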
I don’t expect AGIs to respect human laws and tax codes once they have the capability to just kill us.
That seems more probable in a world where AI companies can bring all the required tools in house. But what if they have large supply chains for minerals and robotics, renting factory space, and employing contractors to do the 0.0001% of work they can’t do themselves?
At that point I still expect it to be hard for them to control bits of land without being governed, which I expect to be good from an AI risk perspective.
I think that AI companies being governed (in general) is marginally better than them not being governed at all. But I also expect that the AI governance which actually occurs will look more like “AI companies have to pay X tax and heed Y planning system”, which still leads to AI(s) eating ~100% of the economy while not being aligned to human values. Then the first coalition capable of killing off the rest and advancing its own aims (which might be a singleton AI, or might not be) will just do that, regulations be damned. I don’t expect that humans will be part of the winning coalition that gets a stake in the future.