We seem to think that people will develop AGI because it can undercut labor on pricing.
But with Sam Altman talking about $20,000/month agents, that is not actually much cheaper than a fully loaded software engineer. If the agent only replaces a single employee, it stops looking cheap as soon as costs overrun even a little, say to $40,000/month.
That is to say, if AGI ends up around 2.5 orders of magnitude (OOM) above the current cost to serve of ChatGPT Pro, it is not cheaper than hiring low- or mid-level employees.
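To make the arithmetic concrete, here is a rough sketch. It uses ChatGPT Pro's ~$200/month retail price as a stand-in for cost to serve (the true figure is not public) and ~$200k/year fully loaded for a mid-level software engineer; both numbers are my illustrative assumptions, not figures from this thread.

```python
import math

# Illustrative assumptions (not from the thread):
chatgpt_pro_price = 200              # $/month, current ChatGPT Pro retail price
agent_price = 20_000                 # $/month, the figure Altman has floated
overrun_price = 40_000               # $/month, the "even a little more" case
engineer_fully_loaded = 200_000 / 12 # $/month, assumed fully loaded mid-level SWE

# How many orders of magnitude above the Pro price point is the agent?
oom_gap = math.log10(agent_price / chatgpt_pro_price)
print(f"Agent price is {oom_gap:.1f} OOM above the ChatGPT Pro price")

# Is the agent cheaper than the single employee it replaces?
for price in (agent_price, overrun_price):
    ratio = price / engineer_fully_loaded
    print(f"${price:,}/month agent costs {ratio:.1f}x one fully loaded engineer")
```

On those assumptions, $20,000/month is already about 2 OOM above the Pro price point and slightly more than one engineer, so an overrun to $40,000/month puts it well past break-even for one-for-one replacement.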
But it still might have advantages.
First, you can buy more subscriptions by negotiating a higher parallelism or API limit on enterprise terms, which means increasing headcount takes a 1-2 week negotiation rather than a 3-6 month hiring process.
The trillion-dollar application for AI companies is not labor, it is hiring. If provisioning AI systems turns out to be hard, they lose their economic appeal unless you plan on dogfooding, as the major labs are doing.
Are you expecting the cost/productivity ratio of AGI in the future to be roughly the same as it is for the agents Sam is currently proposing? I would expect that as time passes, the capabilities of such agents will vastly increase while they also get cheaper. This is generally the case with technology, and previous technologies had no potential means of self-improving on a short timescale. The potential floor for AI “wages” is also incredibly low compared to humans.
It is definitely also worth keeping in mind that AI labor should be much easier to scale than human labor, in part because of the hiring issue, but a relatively high(?) price on initial agents isn’t enough to update me away from its massive potential to undercut labor.
The price of AI “wages” is always going to be whatever the market will bear. The question is how much margin the AGI developer will be able to take, which depends on how much the AGI models commoditize and how much pricing power the lab retains, not on the cost to serve, which only matters as a floor. We should not expect otherwise.
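A minimal sketch of that framing, with the bounds and the pricing-power parameter as my own illustrative model rather than anything stated in the thread: the lab's price sits between its cost to serve (floor) and the buyer's next-best alternative, a human hire (ceiling), and commoditization determines how close to the ceiling it can stay.

```python
def agi_price(cost_to_serve: float, human_alternative: float,
              pricing_power: float) -> float:
    """Monthly price. pricing_power in [0, 1]: 0 = fully commoditized
    (price collapses to cost), 1 = full pricing power (price just
    undercuts the human alternative)."""
    assert 0.0 <= pricing_power <= 1.0
    return cost_to_serve + pricing_power * (human_alternative - cost_to_serve)

# Illustrative numbers: $2k/month cost to serve, $17k/month fully loaded human.
for power in (0.0, 0.5, 0.9):
    price = agi_price(2_000, 17_000, power)
    margin = price - 2_000
    print(f"pricing power {power:.1f}: price ${price:,.0f}/mo, margin ${margin:,.0f}/mo")
```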
There is a cost for AGI at which humans are competitive.
If AGI only becomes competitive at capital costs that no firm can raise, it is not competitive in practice, and we will be waiting on algorithmic improvements again.
Algorithmic improvement is not something I can predict, so I have a wide spread there.
I do think that provisioning vs. hiring, and flexibility in retasking, will be a real point of competition, in addition to raw prices.
I think we agree that AGI has the potential to undercut labor. My spread was roughly 5% uneconomical, 20% economical only for some actors, 50% large displacement, and 25% AGI totally winning, and I was trying to work out what levels of pricing look uneconomical and which frictions are important to compare.