The prediction is correct on all counts, and perhaps slightly understates progress (though it obviously makes weak/ambiguous claims across the board).
The claim that “coding and research agents are beginning to transform their professions” is straightforwardly true (e.g. 50% of Google lines of code are now generated by AI). The METR study was concentrated in March (which is early 2025).
And it is not currently “mid-late 2025”; it is 16 days after the exact midpoint of the year.
Where is that 50% number from? Perhaps you are referring to this post from Google Research. If so, you seem to have taken it seriously out of context. Here is the text before the chart that shows 50% completion:
With the advent of transformer architectures, we started exploring how to apply LLMs to software development. LLM-based inline code completion is the most popular application of AI applied to software development: it is a natural application of LLM technology to use the code itself as training data. The UX feels natural to developers since word-level autocomplete has been a core feature of IDEs for many years. Also, it’s possible to use a rough measure of impact, e.g., the percentage of new characters written by AI. For these reasons and more, it made sense for this application of LLMs to be the first to deploy.
Our earlier blog describes the ways in which we improve user experience with code completion and how we measure impact. Since then, we have seen continued fast growth similar to other enterprise contexts, with an acceptance rate by software engineers of 37%[1] assisting in the completion of 50% of code characters[2]. In other words, the same amount of characters in the code are now completed with AI-based assistance as are manually typed by developers. While developers still need to spend time reviewing suggestions, they have more time to focus on code design.
This is referring to inline code completion, so it's more like advanced autocomplete than an AI coding agent. It's hard to interpret this number, but it seems very unlikely that it means half the coding is being done by AI; much more likely, it is often easy to predict how a line of code will end given the first half of that line and the preceding context. Probably 15-20% of what I type into a standard Linux terminal is autocompleted without AI?
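To make that intuition concrete, here's a toy back-of-the-envelope sketch (the example lines and where the developer stops typing are my own made-up assumptions, not anything from the Google post): an autocomplete that only fills in the mechanically determined tail of each line can still account for a sizeable fraction of the characters while contributing few of the actual design decisions.

```python
# Toy illustration only: the example lines and the prefix/suffix split are
# my own assumptions, not data from the Google Research post.
# Point: once the decision-carrying prefix of a line is typed, the rest is
# often mechanically determined, so inline completion can claim a large
# share of *characters* without doing much of the *coding*.
lines = [
    # (what the developer types, what autocomplete plausibly fills in)
    ("for i in range(len(", "items)):"),
    ("if response.status_code ==", " 200:"),
    ("return json.dumps(", "result, indent=2)"),
]

typed = sum(len(prefix) for prefix, _ in lines)
completed = sum(len(suffix) for _, suffix in lines)
print(f"share of characters 'completed by AI': {completed / (typed + completed):.0%}")
```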
Also, the right metric is how much AI assistance is speeding up coding. I know of only one study on this, from METR, which showed that it is slowing down coding.