contra @noahpinion’s piece on AI comparative advantage
https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-the
TL;DR
AI-pilled people say that some form of major un/underemployment is in the near future for humanity.
This misses the subtle idea of comparative advantage, i.e.:
“Imagine a venture capitalist (let’s call him “Marc”) who is an almost inhumanly fast typist. He’ll still hire a secretary to draft letters for him, though, because even if that secretary is a slower typist than him, Marc can generate more value using his time to do something other than drafting letters. So he ends up paying someone else to do something that he’s actually better at.”
Future AIs will eventually be better than every human at everything, but humans will still have a human economy because AIs will have much better things to do.
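To make that concrete, here’s a toy sketch of the comparative-advantage math with made-up numbers (mine, not Noah’s): Marc is absolutely better at both tasks, but delegation still wins because of opportunity cost.

```python
# Toy comparative-advantage calculation with made-up numbers.
# Marc is absolutely better at BOTH tasks, yet still gains by delegating.

marc_typing_value_per_hr = 200        # $ of letters Marc can draft in an hour
marc_dealmaking_value_per_hr = 10_000 # $ of VC work Marc can do in an hour
secretary_typing_value_per_hr = 100   # $ of letters the secretary drafts per hour
secretary_wage_per_hr = 50            # $ Marc pays the secretary

# Option A: Marc spends the hour typing his own letters.
option_a = marc_typing_value_per_hr  # 200

# Option B: Marc does VC work and pays the secretary to type
# (she needs 2 hours to match his 1 hour of typing output).
hours_needed = marc_typing_value_per_hr / secretary_typing_value_per_hr
option_b = marc_dealmaking_value_per_hr - hours_needed * secretary_wage_per_hr  # 9900

print(f"Type it himself: ${option_a}")
print(f"Delegate and do VC work: ${option_b:.0f}")
# Delegation wins by a huge margin despite Marc's absolute advantage at typing.
```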
I think this makes a lot of sense, and I mostly agree. But I want to pose the question: what if AIs run out of useful things to do?
I don’t mean they’ll run out of useful things to do forever, but what if AIs run into “atom” bottlenecks the way that humans already do?
I’m using a working definition of aligned AI that goes something like:
AI systems mostly act autonomously, but are aligned with individual and societal interests
We make (mostly) reasonable tradeoffs about when and where humans must be in the loop to look over and approve further actions by these systems
More or less: systems like Claude Code or DeepResearch, but with waiting periods of hours/days/weeks between human check-ins instead of minutes.
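As a rough sketch of what I mean (hypothetical placeholder code, not any real agent API), the loop looks something like: act autonomously, but gate high-stakes or overdue actions behind a human sign-off.

```python
import time

# Hedged sketch of the "aligned autonomy" loop described above -- every
# function here is a hypothetical placeholder, not a real agent API.

CHECK_IN_SECONDS = 7 * 24 * 3600   # coarse human check-in cadence (a week)

def propose_action(step):          # placeholder: the AI's next planned action
    return f"action-{step}"

def is_high_stakes(action):        # placeholder: flag actions needing sign-off
    return action.endswith("3")

def human_approves(action):        # placeholder: blocking human review
    print(f"human reviewing {action}...")
    return True

last_review = time.monotonic()
for step in range(5):
    action = propose_action(step)
    overdue = time.monotonic() - last_review > CHECK_IN_SECONDS
    if is_high_stakes(action) or overdue:
        # Human-in-the-loop gate: autonomy pauses until a person signs off.
        if not human_approves(action):
            continue               # skip or replan instead of executing
        last_review = time.monotonic()
    print(f"executing {action} autonomously")
```

The interesting knob is how long CHECK_IN_SECONDS gets before we stop calling the system “supervised.”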
Assuming we have aligned AI systems in the future, think about this:
AI-1 is tasked with developing cures for cancers. It reads all the literature, has some critical insight, and then sends a report to some humans asking them to run experiments X, Y, and Z and report back with the results.
While waiting for the humans to finish the experiments, what does AI-1 do?
In the days/weeks it takes to run the experiments, maybe AI-1 will be tasked with solving some other class of disease. It reads all the literature, has some critical insight, needs more data, and sends more humans off to run more experiments in the atom realm.
Eventually, AI-1 no longer has diseases (of human interest) to analyze and has to wait until experiments finish. We do not have enough wet labs and humans running around cultivating petri dishes to keep it busy. (Or even if we have robot wet labs, we are still bottlenecked by the time it takes to cultivate cultures and so on.)
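Back-of-envelope, with invented numbers: if the thinking takes hours but each wet-lab cycle takes weeks, AI-1 burns through the whole queue of open problems almost immediately and then sits at under 1% utilization.

```python
# Invented numbers for illustration: thinking is fast, atoms are slow.
think_time_hours = 6     # AI time to digest literature + design experiments
wetlab_cycle_weeks = 4   # time for humans to run one round of experiments
open_problems = 50       # diseases of human interest in the queue

wetlab_cycle_hours = wetlab_cycle_weeks * 7 * 24
# The AI burns through every open problem's "thinking" phase quickly...
time_to_exhaust_queue = open_problems * think_time_hours  # 300 hours (~2 weeks)
# ...then sits idle until the first experiments report back.
utilization = think_time_hours / (think_time_hours + wetlab_cycle_hours)
print(f"Queue exhausted after ~{time_to_exhaust_queue} hours of work")
print(f"Per-problem utilization: {utilization:.1%}")  # under 1%
```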
It’s unclear to me what we’ll allocate AI-1 to do next (assuming it’s not fully autonomous). And in a world where AI-1 is fully autonomous and aligned, I’m not sure AI-1 will know either.
This is what makes me unsure about the comparative-advantage argument. At that point, I imagine someone (or AI-1 itself) determines that, in the meantime, it can act as AI-1-MD and consult with patients in need.
And then maybe there are no more patients to screen (perhaps everyone is incredibly healthy, or we have more than enough AIs for everyone to have personalized doctors). AI-1-MD has to find something else to do.
There’s a wide band of how long this period of “atom bottlenecks” lasts. In some areas (like solving all diseases), I imagine the incentives will be aligned enough that we’ll work to remove the wet-lab/experimentation bottleneck. But I think the world looks very different depending on whether removing that bottleneck takes 2 years or 20.
In a world where it takes 2 years to solve the “experimentation” bottleneck, AI-1 can use its comparative advantage to pursue research and probably won’t replace doctors/lawyers/whatever its next-best alternative is. But if it takes 20 years, then maybe a lot of AI-1’s time is spent replacing large parts of what doctors/lawyers/etc. do.
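Roughly (again, invented numbers): the idle capacity that piles up before the bottleneck is removed is 10x larger in the 20-year world, which means 10x more pressure to point AI-1 at its next-best alternatives in the meantime.

```python
# Invented illustration: idle AI capacity accumulated while waiting on atoms.
idle_fraction = 0.99       # share of AI-1's time stuck waiting on experiments
ai_hours_per_year = 8760   # one always-on AI-1 instance

for years_to_fix_bottleneck in (2, 20):
    idle_hours = idle_fraction * ai_hours_per_year * years_to_fix_bottleneck
    print(f"{years_to_fix_bottleneck} yrs: ~{idle_hours:,.0f} idle AI-hours to reallocate")
# 2 years: ~17,345 idle hours; 20 years: ~173,448 -- 10x more slack pushed
# toward the next-best alternatives (doctor/lawyer/etc.).
```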
AIs don’t “labor” the same way humans do. They won’t need 20-30 years of training for advanced jobs; they’ll have instant access to all the knowledge they need to be experts. They won’t think in minutes or hours; they’ll think at the speed of processors: milliseconds and nanoseconds. And they’ll likely be able to context-switch across domains with no penalty.
A really plausible reality to me is that many cognitive and intellectual tasks will be delegated to future AI systems because they’ll be far faster, better, and cheaper than most humans, and they won’t have anywhere better to point their abilities.
They could be tasked with solving the entropic death of the universe à la Asimov’s “The Last Question.”