I would be interested to know how you think things are going to go in the 95-99% of non-doom worlds. Do you expect AI to look like “ChatGPT but bigger, broader, and better” in the sense of being mostly abstracted and boxed away into individual usage cases/situations? Do you expect AIs to be ~100% in command but just basically aligned and helpful?
These are vibes, not predictions.
But in the other worlds I expect governance to sit between many different AI actors and ensure that no single actor controls everything. And then to tax them to pay for this function.
Why doesn’t SpaceX run a country?
I mean… this still sounds like total human disempowerment to me? Just because the world is split up between 5 different AI systems doesn’t mean anything good is happening? What does “a single actor controls everything” have to do with AI existential risk? You can just have 4 or 40 or 40 billion AI systems control everything and this is just the same.
This seems a little bit like a homunculus sitting behind the eyes: the governance makes the AIs aligned and helpful, but why is the governance basically aligned and helpful? I am particularly concerned about the permanent loss of labor strikes and open rebellion as negotiation options for the non-governance people.
Do you think governance is currently misaligned? It seems fine to me.
How do you explain the news? Why do MM predictors keep missing negative surprises there?
I think current governments are kept in check, which scales differently than being aligned when the capabilities of the government are increased.
SpaceX doesn’t run a country because rockets + rocket-building engineers + money cannot perform all the functions of labour, capital, and government, and there’s no smooth pathway to them expanding that far. Increasing company scale is costly and often decreases efficiency; since they don’t have a monopoly on force, they have to maintain cost efficiency and can’t expand into all the functions of government.
An AGI has the important properties of labour, capital, and government (i.e. there is no “Lump of Labour”, so it doesn’t devalue the more of it there is; it can be produced at scale by more labour; and it can organize itself without external coordination or limitations). I expect any AGI which has these properties to very rapidly outscale all humans, regardless of starting conditions, since the AGI won’t suffer from the same inefficiencies of scale or shortages of staff.
I don’t expect AGIs to respect human laws and tax codes once they have the capability to just kill us.
That seems more probable in a world where AI companies can bring all the required tools in-house. But what if they have large supply chains for minerals and robotics, rent factory space, and employ contractors to do the 0.0001% of work they can’t?
At that point I still expect it to be hard for them to control bits of land without being governed, which I expect to be good from an AI-risk perspective.
I think that AI companies being governed (in general) is marginally better than them not being governed at all, but I also expect the AI governance that occurs to look more like “AI companies have to pay X tax and heed Y planning system”, which still leads to AI(s) eating ~100% of the economy while not being aligned to human values. Then the first coalition capable of killing off the rest and advancing its own aims (which might be a singleton AI, or might not be) will just do that, regulations be damned. I don’t expect that humans will be part of the winning coalition that gets a stake in the future.