According to his recent comments, Amodei is forecasting that AI will be writing 90% of code within three to six months.
I vaguely recall hearing something like this, but with crucial qualifiers that disclaim the confidence you're implying. I expect I would've noticed more vividly if the statement hadn't come with clear qualifiers. Knowing the original statement would resolve this.
The original statement is:

“I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code”
So, as I read it, he’s not hedging on 90% in 3 to 6 months, but he is hedging on “essentially all” (99% or whatever that means) in a year.
Here’s the place in the interview where he says this (at 16:16). So there were no crucial qualifiers for the 3-6 months figure, which in hindsight makes sense, since the timeframe is near enough that it likely refers to his impression of an AI already available internally at Anthropic[1]. Maybe it was also corroborated in his mind by some knowledge of the capabilities of a reasoning model based on GPT-4.5, which is almost certainly available internally at OpenAI.
Probably a reasoning model based on a larger pretrained model than Sonnet 3.7. He recently announced in another interview that a model larger than Sonnet 3.7 is due to come out in a “relatively small number of time units” (at 12:35). So probably the plan is to release in a few weeks, but something could go wrong, and then it’ll take longer. Possibly long reasoning won’t be available immediately if there isn’t enough compute to run it, and the 3-6 months figure refers to when he expects enough inference compute for long reasoning to be released.