Yeah, 5x or 10x productivity gains from AI for any one developer seem pretty high, and maybe implausible in most cases. However, note that if 10 people in a 1,000-person company get a 10x speedup, that’s only a ~10% overall speedup, which is significant but not enough that you’d expect to be able to clearly point at the company’s output and say “wow, they clearly sped up because of AI.”
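Spelling out the arithmetic behind that: if 990 engineers stay at 1x and 10 jump to 10x, effective output goes from 1,000 engineer-equivalents to

$$990 \times 1 + 10 \times 10 = 1090,$$

i.e. about a 9% gain, which rounds to the ~10% above.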
For me, I’d say a lot of my gains come from asking AI questions rather than generating code directly. Generating code is also useful though, especially for small snippets like regex. At Google, we have something similar to Copilot that autocompletes code, which is one of the most useful AI features IMO since the generated code is always small enough to understand. 25% of code at Google is now generated by AI, a statistic which probably mostly comes from that feature.
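To make “small snippets like regex” concrete, here’s a hypothetical example of the kind of thing I mean, written in Rust with the regex crate (the function name and the log format are invented for illustration):

```rust
// Hypothetical example of a small, self-contained snippet that AI
// generation/autocomplete handles well: pulling ISO dates out of log lines.
use regex::Regex; // regex = "1" in Cargo.toml

fn extract_dates(log: &str) -> Vec<&str> {
    // \d{4}-\d{2}-\d{2} matches YYYY-MM-DD.
    let re = Regex::new(r"\d{4}-\d{2}-\d{2}").unwrap();
    re.find_iter(log).map(|m| m.as_str()).collect()
}

fn main() {
    let log = "2024-01-02 ok\n2024-01-03 err";
    println!("{:?}", extract_dates(log)); // ["2024-01-02", "2024-01-03"]
}
```

The snippet is small enough to verify at a glance, which is exactly why this use case works so well.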
There are a few small PRs I wrote for my job that were probably 5-10x faster than the counterfactual without AI: I pretty much had the AI write the whole thing and edited from there. But these were one-offs in languages I wasn’t very familiar with (SQL, HTML), which means they would have gone unusually slowly without AI.
For me, I’d say a lot of my gains come from asking AI questions rather than generating code directly.
This is often the case for me as well. I often work on solo side projects and use Claude to think out loud. That lets me put on different hats, much as in pair programming: design mode, implementation mode, testing mode, and documentation mode.
I rarely use generated code as-is, but I do find it interesting to look at. As a concrete example, I recently implemented a game engine (and a multithreaded solver) for the board game Azul in Rust, and found Claude very helpful as an extra set of eyes. I used it as a sort of running issue tracker, design partner, and critic.
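For flavor, here’s a minimal sketch of the shape a parallel move search like that might take. This is not my actual engine; `GameState`, `Move`, and the scoring heuristic are all invented stand-ins:

```rust
// Sketch: score candidate moves in parallel, keep the best.
// A real engine would simulate each move against full game rules;
// here everything is a placeholder for illustration.
use std::thread;

#[derive(Clone)]
struct GameState {
    round: u32,
}

#[derive(Clone, Copy, Debug)]
struct Move {
    factory: usize,
    row: usize,
}

fn evaluate(state: &GameState, mv: Move) -> i32 {
    // Stand-in heuristic; replace with a real move simulation.
    state.round as i32 + mv.factory as i32 - mv.row as i32
}

fn best_move(state: &GameState, moves: Vec<Move>) -> Option<Move> {
    // One thread per candidate move (fine for a sketch; a real solver
    // would use a thread pool or a library like rayon).
    let handles: Vec<_> = moves
        .into_iter()
        .map(|mv| {
            let s = state.clone();
            thread::spawn(move || (evaluate(&s, mv), mv))
        })
        .collect();
    handles
        .into_iter()
        .filter_map(|h| h.join().ok())
        .max_by_key(|&(score, _)| score)
        .map(|(_, mv)| mv)
}

fn main() {
    let state = GameState { round: 1 };
    let moves = vec![Move { factory: 0, row: 2 }, Move { factory: 1, row: 0 }];
    println!("{:?}", best_move(&state, moves));
}
```

Claude was most useful not for writing code like this, but for critiquing it: spotting missing edge cases, questioning the data model, and so on.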
Now that I think about it, maybe the best metaphor I can use is that Claude helps me project myself onto myself. For many of my projects, I lean towards “write good, understandable code” instead of “move fast and break things”. This level of self-criticism and curiosity has served me well with Claude. Without this mentality, I can see why people dismiss LLM-assisted coding; it certainly is far from a magic genie.
I’ve long had a bias toward design-driven work (write the README first, think on a whiteboard, etc), whether it be coding or almost anything, so having an infinitely patient conversational partner can be really amazing at times. At other times, the failure modes are frustrating, to say the least.
However, note that if 10 people in a 1,000-person company get a 10x speedup, that’s only a ~10% overall speedup
Plausible. Potential counter-argument: software engineers aren’t equivalent, “10x engineers” are a thing, and if we assume they’re high-agency people always looking to streamline and improve their workflows, we should expect them to be precisely the people who get a further 10x boost from LLMs. Have you observed any specific people suddenly becoming 10x more prolific?
But these were one-offs where I wasn’t very familiar with the language (SQL, HTML)
Flat cost reductions, yeah. Though uniformly slashing the cost of becoming proficient in new sub-domains of programming could perhaps have nontrivial effects on the software industry as a whole…
For example, perhaps the actual impact of LLMs should instead be modeled as all (competent) programmers effectively becoming able to use any and all programming languages/tools on offer (plus knowing that these tools exist)? In idealized theory, that should lead to every piece of a software project being built with the best tools available for it, rather than being warped by whatever the specific developer happened to be proficient in.
“10x engineers” are a thing, and if we assume they’re high-agency people always looking to streamline and improve their workflows, we should expect them to be precisely the people who get a further 10x boost from LLMs. Have you observed any specific people suddenly becoming 10x more prolific?
In addition to the objection from Archimedes, another reason this is unlikely to be true is that 10x coders are often much more productive than other engineers because they’ve heavily optimized around the specific problems or skills that bottleneck other engineers, and most of those optimizations don’t readily admit an LLM being suddenly inserted into the loop.
“10x engineers” are a thing, and if we assume they’re high-agency people always looking to streamline and improve their workflows, we should expect them to be precisely the people who get a further 10x boost from LLMs.
I highly doubt this. A 10x engineer is likely already bottlenecked by non-coding work that AI can’t help with, so even if they 10x their coding, they may not increase overall productivity much.