My understanding of the situation, from speaking to people who code at normal firms, and to their management, is that this is all about the theory of constraints. As a simplified example: if a task previously needed one day each from a business analyst, a QA tester, and a programmer, and the programmer’s efficiency doubles, or quintuples, the impact on output is zero, because the firm isn’t set up to go much faster.
Firms need to rebuild their processes around this to take advantage, and that’s only starting to happen, and only at some firms.
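The constraint logic above can be sketched in a few lines. This is a toy model, not anything from the comment itself: role names and the 5x figure are just the illustrative numbers used above, and `tasks_per_day` is a hypothetical helper.

```python
# Toy theory-of-constraints model: a task needs one day each from a
# business analyst, a QA tester, and a programmer, working as a serial
# pipeline. Rates are in tasks per day per role.

def tasks_per_day(stage_rates):
    """Throughput of a serial pipeline is capped by its slowest stage."""
    return min(stage_rates.values())

baseline = {"analyst": 1.0, "qa": 1.0, "programmer": 1.0}
augmented = dict(baseline, programmer=5.0)  # programmer becomes 5x faster

print(tasks_per_day(baseline))   # 1.0 task/day
print(tasks_per_day(augmented))  # still 1.0 — the other stages bind
```

Speeding up a non-bottleneck stage leaves throughput unchanged; only rebuilding the whole pipeline (raising every stage's rate) moves the number.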
That makes perfect sense to me.
However, shouldn’t we expect immediate impact from startups led by early adopters, which could be designed from the ground up around exploiting LLMs? Similarly with AGI labs: you’d expect them to rapidly reorganize around this.
Yet I haven’t heard of a single example of a startup shipping a five- or ten-year project in one year, or of a new software firm eating the lunch of ossified firms ten times its size.
No reorganization can increase the tempo of other organizations (the pace of customer feedback), which is often the key bottleneck in software already. This speed dynamic is not new; it is just in sharper focus.
Lemonade is doing something like what you describe in insurance. I suspect other examples exist. But most market segments, even in “pure” software, don’t revolve around the software product alone, so even when better products emerge, it is slower to become obvious.
Why wouldn’t you see the firm freeze programmer hiring and start laying people off en masse?
I’m not sure we’d see this so starkly if people can change roles and shift between job types. But haven’t we already seen firms engage in large rounds of layoffs over the past couple of years, and then follow up by not hiring as many coders?
Why fire devs who are now 10x as productive, when you could ship 10x more, 10x faster? Don’t you want to overtake your unaugmented competitors and survive against those who didn’t fire their devs?
… did you read David’s comment?
This is basically a version of Tyler Cowen’s argument (summarized here: https://marginalrevolution.com/marginalrevolution/2025/02/why-i-think-ai-take-off-is-relatively-slow.html) for why AGI won’t change things as quickly as we think: once intelligence is no longer a bottleneck, the other constraints become much more binding. Once programmers are no longer the bottleneck, we’ll be bound by the friction that exists elsewhere within companies.
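The shifting-bottleneck point has an Amdahl’s-law flavor, and can be made concrete with a sketch. The 30% and 10x figures below are purely illustrative assumptions, not numbers from the thread or from Cowen’s post.

```python
# Amdahl's-law-style sketch: if only a fraction p of an end-to-end
# process is accelerated by a factor s, the overall speedup is
#   1 / ((1 - p) + p / s)
# and is bounded by 1 / (1 - p) no matter how large s gets.

def overall_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# Suppose coding is 30% of a product cycle and becomes 10x faster:
print(round(overall_speedup(0.3, 10.0), 2))    # ~1.37x end to end
# Even with infinitely fast coding, the ceiling is 1 / 0.7:
print(round(overall_speedup(0.3, 1e9), 2))     # ~1.43x
```

The company only gets a large speedup once the other 70% of the process (the "friction elsewhere") is accelerated too, which is exactly the disagreement in the next comment.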
Yes, except that as soon as AI can replace the other sources of friction, we’ll have a fairly explosive takeoff. He thinks those sources of friction will stay forever; I think they are barriers only for now. The engine for radical takeoff won’t be traditional processes adopting the models in individual roles; it will be new business models developed to take advantage of the technology.
Much like early TV was just videos of people putting on plays, and it took time for people to realize the medium’s potential. But once they did, they didn’t make plays that were better suited for TV; they did something that actually used the medium well. And what using AI well would mean, in terms of business implications, is cutting out human delays, inputs, and required oversight. Which is worrying for several reasons!