one problem with UBI as a solution for AI economic disruption: at the moment when AI can first replace a human job, it will probably cost only epsilon less than the human. the cost will be mostly capital (datacenters, chips, electric plants, etc), rather than labor. so we can only afford to give the human epsilon UBI. as time goes on, eventually the AI gets cheap enough that humans can get substantial UBI, possibly exceeding their original income, as the AIs become more productive than the humans were. but there’s a big gap in the middle that we need to bridge somehow. the best case scenario is that different industries get automated at different times so that the gaps don’t line up, and we can redistribute the surplus from the first industries to be automated to fill the gap for later industries. the worst case is that all the gaps happen at once and we all starve to death because the surplus is not enough to keep people alive.
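here's a tiny toy model of the shape i mean; the wage, the starting AI cost, and the 30%-per-year decline are all numbers i'm making up purely for illustration:

```python
# Toy model of the gap described above; every number here is invented and
# only the shape of the curve matters.

human_wage = 50_000           # annual cost of the human worker (assumed)
ai_cost = 100_000             # AI cost for the same job at the start (assumed)
ai_cost_decline = 0.7         # AI cost falls 30% per year (assumed)

for year in range(8):
    replaced = ai_cost < human_wage                       # employer switches once AI is cheaper
    surplus = human_wage - ai_cost if replaced else 0     # what the switch frees up per worker
    print(f"year {year}: AI cost {ai_cost:,.0f}, replaced={replaced}, "
          f"fundable UBI per worker {surplus:,.0f}")
    ai_cost *= ai_cost_decline
```

in the year the switch first happens, the fundable UBI per worker is tiny; it only approaches the old wage as AI costs keep falling.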
This can’t be right. The troublesome point you describe happens when there are already enough “AI workers” to displace all current jobs, but the extra productivity is still only epsilon (why?) and the number of “AI workers” isn’t growing explosively far beyond that (why?)
Anyway, the real problem isn't that capital owners won't have enough money to pay us UBI. It's that they… won't pay us UBI. Simple as that.
Well, and the possibility that the capital “owners” might effectively be the AIs. But yes.
I don't understand your objection. right now, the cost of replacing a given human with AI is greater than the cost of the human (because the compute is very expensive, the AIs are not very good, etc). over time, the AI gets cheaper and cheaper, until at some point it is precisely as expensive as the human. one day thereafter, AI will be very slightly cheaper than the human. you would prefer to pay for the AI compute instead of the human salary. at that moment, employers will be economically incentivized to fire all the humans and replace them with AIs. because the AIs still cost almost exactly as much as humans at this moment, it won't be economical to have substantially more AIs than you had humans the day before; if it were, we would have hired more humans in the first place. there must be diminishing returns to the quantity of humans employed, and the previous equilibrium is still very close to the new equilibrium. but the amount of new value created for the world by this switch is very small: only the delta between what the humans used to cost and what the AIs now cost.
Your economics are wrong for a few reasons. Let’s grant the hypothetical where all humans supply homogeneous labor at a uniform wage.
If AI is slightly cheaper than humans, what happens is that wages fall slightly. At the new, lower wages, there is more demand for labor (and more humans drop out of the labor force). At the same time, capital costs are bid up slightly. Eventually the price of AI and human labor is equal, and the quantity demanded is equal to the quantity supplied.
At the same time, you are increasing demand for labor to build the AI (right now labor is ultimately the main input to building all the stuff that goes in datacenters). If the social value of the AI is near zero, then the net increase in demand is almost the same as the net increase in supply. Lowering wages and increasing capital costs doesn't offset the benefits of extra productive capacity; it just shifts value from laborers to capitalists.
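Here is a made-up linear version of that adjustment (the demand curve, the human supply curve, and the AI price p_ai are all assumptions for illustration, not anything from the comment above): the wage falls to the AI price, some humans drop out, total labor used rises, and worker income shrinks even though productive capacity grows.

```python
# Assumed linear curves. Labor demand: quantity = 100 - price;
# human labor supply: quantity = wage. Pre-AI equilibrium: wage 50, employment 50.
# AI is modeled as an unlimited extra labor supply at price p_ai (assumed).

def equilibrium(p_ai):
    wage = min(50, p_ai)            # wage is bid down to the AI price, if lower
    human_labor = wage              # some humans drop out at the lower wage
    total_labor = 100 - wage        # buyers demand more labor when it is cheaper
    ai_labor = total_labor - human_labor
    worker_income = wage * human_labor
    return wage, human_labor, ai_labor, worker_income

for p_ai in (60, 49, 30, 5):
    wage, h, a, income = equilibrium(p_ai)
    print(f"p_ai={p_ai}: wage={wage}, human labor={h}, AI labor={a}, worker income={income}")
```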
The real fiscal issue in this scenario is that you are shifting output from labor to capital, and the tax rate on capital is lower than the tax rate on labor. (Moreover, as you automate the economy there are further corporate reorganizations that would drive effective tax rates well below the on-paper capital gains rate.) You're doing that at the same time that you are potentially increasing spending, which is tough unless you are willing to adjust the tax code.
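A back-of-envelope version of the fiscal point, with assumed tax rates:

```python
# Same output, taxed as capital income instead of wages, yields less revenue.
output = 1_000_000        # value produced, before and after automation (assumed)
labor_tax = 0.30          # assumed effective rate on wages
capital_tax = 0.15        # assumed effective rate on capital income

print("revenue when paid as wages:      ", int(output * labor_tax))     # 300,000
print("revenue when paid out to capital:", int(output * capital_tax))   # 150,000
```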
I’m inclined to agree with other commenters though that none of this seems like the most important issue. The fiscal issues can be overcome if the state cares, and my best guess is that growth will accelerate enough that it would be OK even if there was no political change.
People should have much bigger concerns about being completely materially disempowered: (i) the state may not continue to support them, either because they are politically disempowered or because the state itself is disempowered, and (ii) even if they are able to survive they will have no say over what the world looks like and that sucks in its own way.
My idea was, maybe the AI company is willing to sell you 1 unit of AI labor at a human-competitive price, but if you order 1000 units they'll ask for a higher price per unit, because they need to build more datacenters or something. In this case the replacement of humans will be gradual even if all humans are equally productive. Another possibility is that humans aren't all equally productive, so AI will first get good enough to replace the worst worker, then the second worst, and so on. For these two reasons, I think that by the time lots of people have been replaced, the difference in productivity between AI and the average person replaced so far won't be epsilon. It won't be the full salary either, but maybe something substantial. Anyway, that was it.
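Here's a rough sketch of those two effects with invented numbers (the wage list and the rising AI cost schedule are pure assumptions): replacement stops partway down the list, and the surplus per replaced worker comes out substantial but well short of a full salary.

```python
# Wages differ across workers, and the marginal cost of one more unit of AI
# labor rises with the number of units already deployed (capacity is assumed
# to be the constraint).

wages = [100, 80, 70, 60, 55, 50, 45, 40, 35, 30]    # most expensive workers first

def ai_marginal_cost(units_deployed):
    return 25 + 5 * units_deployed                   # assumed rising unit cost

replaced = []
for wage in wages:
    if ai_marginal_cost(len(replaced)) >= wage:      # AI no longer cheaper than this worker
        break
    replaced.append(wage)

ai_spend = sum(ai_marginal_cost(i) for i in range(len(replaced)))
surplus = sum(replaced) - ai_spend
print(f"replaced {len(replaced)} of {len(wages)} workers")
print(f"average replaced wage: {sum(replaced) / len(replaced):.0f}")
print(f"surplus per replaced worker: {surplus / len(replaced):.0f}")
```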
the worst case is that all the gaps happen at once and we all starve to death because the surplus is not enough to keep people alive.

This does not follow at all. The total amount of production would somehow have to decrease; otherwise it's just a question of distribution of resources, which is the whole point of UBI. To literally starve, someone would have to shut down some amount of food production (the robots don't eat).
food production still consumes resources that the robots do care about: fuel, machinery, logistics capacity, etc.
Good point. Decreased quality of life due to competing with AI for basic resources has already begun (RAM prices) and will eventually show up in goods that aren't direct inputs to AI.
What do you define as "replace a human job"? We are already seeing AI that can replace at least 50% of a job for far less than 50% of the cost of paying a worker to do those parts of the job. In principle that means many employers could fire half their workforce and have the remaining employees pick up the other 50% of the jobs from the fired employees.
In practice this would involve huge disruption and uncertainty, and perhaps they can avoid that bother by letting go their most obviously least productive employees, lowering costs a little (say to 95%) to do the same or slightly more work with much less disruption. Over time, the employees who use more AI in their workflows need less effort to do the job. We are seeing this already.
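Toy arithmetic for the two options, with invented salaries and an assumed per-seat AI cost:

```python
workers, salary = 100, 60_000
ai_seat_cost = 3_000            # assumed yearly AI cost per remaining employee

def annual_cost(kept):          # salaries plus AI tooling for the employees kept
    return kept * (salary + ai_seat_cost)

print("status quo:  ", workers * salary)   # 6,000,000
print("aggressive:  ", annual_cost(50))    # fire half outright: 3,150,000 (~53%)
print("conservative:", annual_cost(90))    # trim the least productive 10%: 5,670,000 (~95%)
```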
This obviously isn't stable long-term economic behaviour. Those conservative employers will probably continue to shrink their workforce slowly, while employers much more willing to accept disruption eat into their markets at greatly reduced costs.
However, it takes time. The more capable AI becomes per unit cost, the greater the advantage disruption-tolerant employers will have, possibly leading to multiple large failures of conservative employers, or to rapid culture changes to avoid such failures, with large segments of the workforce replaced at some later point.
In this model (which matches what we are already seeing), the job losses are inevitable, but they come an economically significant and somewhat unpredictable time after the cost of AI drops well below the cost of employing a human for some tasks.
There’s nothing that requires an economy to maintain a continuous equilibrium of perfectly distributed cost/productivity balances at all times, and we see plenty of past examples where it has not. Continuous changes to parameters in a complex system often result in sudden changes in behaviour, not just continuous ones.
it doesn't matter whether you're fully replacing one job or partially replacing multiple jobs. my model still implies that the market value of human labor diminishes by more than the amount of money needed to keep everyone at the same level of consumption as before.
That seems like an argument for establishing UBI sooner rather than later.
You seem to be describing a situation where there is a temporary absence of sufficient funds for a UBI (the “big gap in the middle”) after which there’s plenty of money to fund the UBI, potentially at a higher level than people’s original income.
The generic solution to a temporary lack of funds, when plenty of funds will be available in the future, is getting a loan to be paid off when the money comes in. This consumption-smoothing would be good from the perspective of the AI companies as well: "everyone is out of work and has no money to spend", if it persists long enough for people to burn through their savings, predictably leads to "the revenue streams of the AI companies collapse".
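A crude version of that smoothing, where the UBI level, the per-person surplus path, and the interest rate are all assumed: the provider runs up debt during the lean years and pays it off once the surplus overtakes the UBI.

```python
ubi_needed = 30_000                                                    # per person, per year (assumed)
surplus_per_person = [1_000, 5_000, 15_000, 40_000, 80_000, 120_000]  # assumed path as AI gets cheaper
interest = 0.03                                                        # assumed borrowing rate

debt = 0.0
for year, surplus in enumerate(surplus_per_person):
    debt = debt * (1 + interest) + (ubi_needed - surplus)   # borrow the shortfall, repay from excess
    print(f"year {year}: surplus {surplus:>7,}  debt {debt:>10,.0f}")
```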
How it would work out in detail is unclear, but if AI companies end up with a lot of economic power, I’d expect that gets taxed in some form by whoever’s providing the UBI, and in the meantime the UBI provider goes into a bit of debt.
in this model, as soon as an ai is epsilon cheaper than a human, humans stop getting hired?