That seems more probable in a world where AI companies can bring all the required tools in-house. But what if they have large supply chains for minerals and robotics, rent factory space, and employ contractors to do the 0.0001% of the work they can’t?
At that point I still expect it to be hard for them to control bits of land without being governed, which I expect to be good for reducing AI risk.
I think that AI companies being governed (in general) is marginally better than them not being governed at all. But I also expect that the AI governance that actually occurs will look more like “AI companies have to pay X tax and heed Y planning system.” That still leads to AI(s) eating ~100% of the economy while not being aligned to human values. Then the first coalition capable of killing off the rest and advancing its own aims (which might be a singleton AI, or might not) will just do that, regulations be damned. I don’t expect that humans will be part of the winning coalition that gets a stake in the future.