The problem with this argument is that it ignores a unique feature of AIs—their copiability. It takes ~20 years and O($300k) to spin up a new human worker. It takes ~20 minutes to spin up a new AI worker.
So in the long run, for a human to do a task economically, they must have not just some comparative advantage, but one large enough to cover the massive cost differential in “producing” a new worker.
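As a toy illustration of that cost differential (a rough sketch only: the ~$300k / ~20-year figures come from the paragraph above, while the AI-side spin-up cost, inference cost, and hardware lifetime are made-up placeholders):

```python
HOURS_PER_YEAR = 2000  # rough full-time working hours per year

def amortized_hourly_cost(production_cost, working_years, running_cost_per_hour):
    """Spread the one-time cost of 'producing' a worker over its working hours."""
    total_hours = working_years * HOURS_PER_YEAR
    return production_cost / total_hours + running_cost_per_hour

# Human: ~$300k and ~20 years to produce, then (say) a 40-year career.
# Wages and upkeep are omitted here to isolate the production cost alone.
human = amortized_hourly_cost(300_000, 40, running_cost_per_hour=0.0)

# AI copy: hypothetically ~$10 to spin up, $1/hour of inference,
# amortized over a 5-year hardware lifetime.
ai = amortized_hourly_cost(10, 5, running_cost_per_hour=1.0)

# The human's comparative advantage on a task must at least cover this gap.
print(f"human: ${human:.2f}/h of production cost; AI: ${ai:.3f}/h")
```

The exact numbers are irrelevant; the point is that the amortized cost of a new AI worker is dominated by running costs, not production costs, which is the opposite of the human case.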
This actually analogizes more to engines. I would argue that a big factor in the near-total replacement of horses by engines is not so much that engines are exactly 100x better than horses at everything, but that engines can be mass-produced. In fact I think the claim that engines are exactly equally better than horses at every horse-task is obviously false if you think about it for two minutes. But any time there’s a niche where engines are even slightly better than horses, we can just increase production of engines more quickly and cheaply than we can increase production of horses.
These economic concepts such as comparative advantage tend to assume, for ease of analysis, a fixed quantity of workers. When you are talking about human workers in the short term, that is a reasonable simplifying assumption. But it leads you astray when you try to use these concepts to think about AIs (or engines).
In fact I think the claim that engines are exactly equally better than horses at every horse-task is obviously false if you think about it for two minutes.
I came to comment mainly on this claim in the OP, so I’ll put it here: In particular, at a glance, horses can reproduce, find their own food and fuel, self-repair, and learn new skills to execute independently or semi-independently. These advantages were not sufficient in practice to save (most) horses from the impact of engines, and I do not see why I should expect humans to fare better.
I also find the claim that humans fare worse in a world of expensive robotics than in a world of cheap robotics to be strange. If in one scenario, A costs about as much as B, and in another it costs 1000x as much as B, but in both cases B can do everything A can do equally well or better, plus the supply of B is much more elastic than the supply of A, then why would anyone in the second scenario keep buying A except during a short transitional period?
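The reasoning in that paragraph reduces to a one-line decision rule (a deliberately minimal sketch; the quality assumption and the price ratios are the hypothetical ones above):

```python
def buyer_picks_b(cost_a, cost_b):
    """If B does everything A does equally well or better, a cost-minimizing
    buyer picks B whenever it is no more expensive than A."""
    return cost_b <= cost_a

# Expensive robotics: A costs about as much as B -> B still (weakly) wins.
assert buyer_picks_b(cost_a=100, cost_b=100)

# Cheap robotics: A costs 1000x as much as B -> B wins outright.
assert buyer_picks_b(cost_a=100, cost_b=0.1)
```

Either way the buyer ends up with B; the price ratio mainly affects how fast the switch happens, which is roughly the “short transitional period” point.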
When we invented steam engines and built trains, horses did great for a while, because their labor became more productive. Then we got all the other types of things with engines, and the horses no longer did so great, even though they still had (and in fact still have) a lot of capabilities the replacement technology lacked.
These economic concepts such as comparative advantage tend to assume, for ease of analysis, a fixed quantity of workers. When you are talking about human workers in the short term, that is a reasonable simplifying assumption. But it leads you astray when you try to use these concepts to think about AIs (or engines).
I think this is a central simplifying assumption that leads a lot of economists to assume away AI’s potential, because AI directly threatens the model in which the quantity of workers is fixed. This is probably the single biggest difference between me and people like Tyler Cowen: he doesn’t believe population growth matters much, while I consider it, to first order, the single most important thing powering our economy today.
Agreed on population. To a first approximation, it’s directly proportional to the supply of labor, supply of new ideas, quantity of total societal wealth, and market size for any particular good or service. That last one also means that with a larger population, the economic value of new innovations goes up, meaning we can profitably invest more resources in developing harder-to-invent things.
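A minimal sketch of that last point (illustrative numbers only, not real estimates): if an innovation’s value scales with the number of people who can buy it, a larger population raises the ceiling on how expensive an invention can be and still pay off.

```python
def innovation_is_profitable(rnd_cost, population, value_per_person):
    """An invention pays off if its total market value exceeds its R&D cost."""
    return population * value_per_person > rnd_cost

# A hypothetical $1B invention worth $2 to each person:
assert not innovation_is_profitable(1e9, population=100_000_000, value_per_person=2.0)
assert innovation_is_profitable(1e9, population=1_000_000_000, value_per_person=2.0)
```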
I really don’t know how that impact (more minds) will compare to the improved capabilities of those minds. We’ve also never had a single individual with as much ‘human capital’ as a single AI can plausibly achieve, even if each of its capabilities is only around human level, and polymaths are very much overrepresented among the people most likely to have impactful new ideas.