These are vibes, not predictions.
But in the other worlds I expect governance to sit between many different AI actors and ensure that no single actor controls everything. And then to tax them to pay for this function.
Why doesn’t SpaceX run a country?
I mean… this still sounds like total human disempowerment to me? Just because the world is split up between 5 different AI systems doesn’t mean anything good is happening? What does “a single actor controls everything” have to do with AI existential risk? You can just have 4 or 40 or 40 billion AI systems control everything and this is just the same.
Does the current world look like total human disempowerment to you? Currently it’s split between like 1000 large companies?
Those companies are run by humans, so no, of course the world does not look like total human disempowerment to me?
If practically all of the world’s governments and corporations were run by AIs… well, then I expect we would be dead, but if for some reason we were not, it seems very likely that yes, that would constitute total human disempowerment.
Also, those companies are not controlling, for example, what I write in this comment, or what room I go into next.
SpaceX doesn’t run a country because rockets + rocket-building engineers + money cannot perform all the functions of labour, capital, and government, and there’s no smooth pathway to them expanding that far. Increasing company scale is costly and often decreases efficiency; since they don’t have a monopoly on force, they have to maintain cost efficiency and can’t expand into all the functions of government.
An AGI has the important properties of labour, capital, and government: there is no “Lump of Labour”, so it doesn’t devalue the more of it there is; it can be produced at scale by more labour; and it can organize itself without external coordination or limitations. I expect any AGI with these properties to very rapidly outscale all humans, regardless of starting conditions, since the AGI won’t suffer from the same inefficiencies of scale or shortages of staff.
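To make the scaling claim concrete, here is a deliberately crude toy model (my own illustration; the reinvestment rates, the 0.8 overhead exponent, and the 1000x head start are all invented numbers): a firm whose reinvestable output grows sublinearly with size versus an AGI population that reinvests at constant efficiency.

```python
# Toy comparison, not a forecast: a "firm" whose reinvestable output grows
# sublinearly with size (coordination overhead / diseconomies of scale)
# versus an "AGI" whose output converts into more capacity at roughly
# constant efficiency. All parameter values are made up for illustration.

def grow_firm(capacity, reinvest_rate=0.2, scale_exponent=0.8):
    # Diseconomies of scale: effective output is capacity**0.8, so the
    # firm's relative growth rate falls as it gets bigger.
    return capacity + reinvest_rate * capacity ** scale_exponent

def grow_agi(capacity, reinvest_rate=0.2):
    # No lump-of-labour or coordination penalty: output scales linearly
    # and all of it can be reinvested into more capacity.
    return capacity + reinvest_rate * capacity

firm, agi = 1000.0, 1.0  # the firm starts with a 1000x head start
for year in range(1, 101):
    firm, agi = grow_firm(firm), grow_agi(agi)
    if agi > firm:
        print(f"toy AGI overtakes the toy firm in year {year}")
        break
```

The exact crossover year is meaningless; the only point is that a constant-efficiency grower eventually overtakes a sublinear one from any starting ratio.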
I don’t expect AGIs to respect human laws and tax codes once they have the capability to just kill us.
That seems more probable in a world where AI companies can bring all the required tools in-house. But what if they have large supply chains for minerals and robotics, rent factory space, and employ contractors to do the .0001% of work they can’t?
At that point I still expect it to be hard for them to control bits of land without being governed, which I expect to be good for AI risk.
I think that AI companies being governed (in general) is marginally better than them not being governed at all, but I also expect the AI governance that actually occurs to look more like “AI companies have to pay X tax and heed Y planning system”, which still leads to AI(s) eating ~100% of the economy while not being aligned to human values. Then the first coalition (which might be a singleton AI, or might not be) that is capable of killing off the rest and advancing its own aims will just do that, regulations be damned. I don’t expect that humans will be part of the winning coalition that gets a stake in the future.
This seems a little bit like a homunculus sitting behind the eyes: the governance makes the AIs aligned and helpful, but why is the governance itself basically aligned and helpful? I am particularly concerned about the permanent loss of labor strikes and open rebellion as negotiation options for the non-governance people.
Do you think governance is currently misaligned? It seems fine to me.
I think current governments are kept in check rather than aligned, and being kept in check scales differently from being aligned as the government’s capabilities increase.
How do you explain the news? Why do MM predictors keep missing negative surprises there?
I said “fine”, not “good”. I think it’s been a steady upward trend on everything but animal welfare (and AI, but that’s what we’re currently discussing).