This idea kind of rhymes with gain-of-function research in a way that makes me uncomfortable: "Let's intentionally create harmful things, but it's OK because we are creating them for the purpose of preventing the harm those things would cause."
I'm not sure I can formalize this into a logically tight case against doing it, but it seems conceptually similar to X, and X is bad.
The problem with this argument is that it ignores a unique feature of AIs—their copiability. It takes ~20 years and O($300k) to spin up a new human worker. It takes ~20 minutes to spin up a new AI worker.
So in the long run, for a human to do a task economically, it is not enough to have some comparative advantage; the advantage has to be large enough to cover the massive cost differential in "producing" a new worker.
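A quick back-of-the-envelope sketch of that differential, in Python. Only the $300k and 20-year figures come from the argument above; the career length and the AI provisioning cost are made-up assumptions for illustration:

```python
# Rough amortization of the "production cost" gap. Only the $300k and
# 20-year figures come from the comment; everything else is an
# illustrative assumption.

human_setup_cost = 300_000   # ~$ to raise and train a new human worker
career_hours = 40 * 2_000    # assumed ~40-year career at ~2,000 hrs/year

ai_setup_cost = 10           # assumed $ to spin up one more AI instance

# Production overhead, amortized over each working hour:
human_overhead = human_setup_cost / career_hours   # = $3.75/hour
ai_overhead = ai_setup_cost / career_hours         # ~= $0.0001/hour

# Before wages or quality even enter the picture, the human's per-hour
# edge has to beat ~$3.75/hour just to pay back the cost of producing
# them (and that ignores the 20-year lead time entirely).
print(f"human overhead: ${human_overhead:.2f}/hr, ai: ${ai_overhead:.4f}/hr")
```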
A better analogy here is engines. I would argue that a big factor in the near-total replacement of horses by engines is not that engines are exactly 100x better than horses at everything, but that engines can be mass-produced. In fact, the claim that engines are better than horses by exactly the same factor at every horse-task is obviously false if you think about it for two minutes. But any time there's a niche where engines are even slightly better than horses, we can increase the production of engines far more quickly and cheaply than we can increase the production of horses.
Economic concepts like comparative advantage tend to assume, for ease of analysis, a fixed quantity of workers. When you are talking about human workers in the short term, that is a reasonable simplifying assumption. But it leads you astray when you use these concepts to think about AIs (or engines).
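To make the fixed-supply point concrete, here is a toy allocation model. All the numbers are invented: the AI is assumed to be 10x better at one task and only 1.1x better at the other, and the hourly costs are arbitrary. With a fixed AI supply, comparative advantage would route humans to task B; with elastic supply, you just compare cost per unit of output and spin up more AIs wherever they win:

```python
# Toy illustration of how elastic AI supply breaks the comparative-
# advantage story. All numbers below are made-up assumptions.

HUMAN_RATE = {"task_a": 1.0, "task_b": 1.0}    # units/hour
AI_RATE    = {"task_a": 10.0, "task_b": 1.1}   # AI only slightly better at B
HUMAN_HOURLY_COST = 30.0   # assumed wage ($/hr)
AI_HOURLY_COST    = 25.0   # assumed amortized compute cost ($/hr)

def cheapest_worker(task: str) -> str:
    """With elastic supply, cost per unit of output decides who gets
    the task, because you can always provision another AI instance."""
    human_cost = HUMAN_HOURLY_COST / HUMAN_RATE[task]
    ai_cost = AI_HOURLY_COST / AI_RATE[task]
    return "ai" if ai_cost < human_cost else "human"

for task in ("task_a", "task_b"):
    print(task, "->", cheapest_worker(task))
# AI wins both: $2.50 vs $30.00 per unit on A, and ~$22.73 vs $30.00
# per unit on B, despite only a 1.1x productivity edge on B.
```

The point of the sketch is that nothing in the decision rule cares about relative advantage across tasks; that only matters when the supply of one worker type is capped.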