If the companies need capital—and I believe that they do—what better option do they have?
I think you’re imagining cash-rich companies choosing to sell portions for dubious reasons, when they could just keep it all for themselves.
But in fact, the companies are burning cash, and to continue operating they need to raise at some valuation, or else not be able to afford the next big training run.
The valuations at which they are raising are, roughly, where supply and demand equilibrate for the amounts of cash that they need in order to continue operating. (Possibly they could raise at higher valuations by taking on less-scrupulous investors, but to date I believe some of the companies have tried to avoid this.)
I don’t doubt they need capital. And the Nigerian prince who needs $5,000 to claim the $100 million inheritance does too. It’s the fact that he/they can’t get capital at something coming anywhere close to the claimed value that’s suspicious.
Amodei is forecasting AI that writes 90% of code in three to six months, according to his recent comments. Is Anthropic really burning cash so fast that they can’t wait a quarter, demonstrate to investors that AI has essentially solved software, and then raise at 10x the valuation?
Is Amodei forecasting that, in 3 to 6 months, AI will produce 90% of the value derived from written code, or just that AI will produce 90% of code, by volume? It would not surprise me if 90% of new “art” (defined as non-photographic, non-graph images) by volume is currently AI-generated, and I would not be surprised to see the same thing happen with code.
And in the same way that “AI produces 90% of art-like images” is not the same thing as “AI has solved art”, I expect “AI produces 90% of new lines of code” is not the same thing as “AI has solved software”.
Yeah, fair enough. His prediction was: “I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code.”
The second one is more hedged (“may be in a world”), but “essentially all the code” must translate to a very large fraction of all the value, even if that last 1% or whatever is of outsized economic significance.
Amodei is forecasting AI that writes 90% of code in three to six months according to his recent comments.
I vaguely recall hearing something like this, but with crucial qualifiers that disclaim the implied confidence you are gesturing at. I expect I would’ve noticed more vividly if this statement didn’t come with clear qualifiers. Knowing the original statement would resolve this.
The original statement is: “I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code.”
So, as I read it, he’s not hedging on 90% in 3 to 6 months, but he is hedging on “essentially all” (99%, or whatever that means) in a year.
Here’s the place in the interview where he says this (at 16:16). So there were no crucial qualifiers for the 3-6 months figure, which in hindsight makes sense, since it’s near enough to likely refer to his impression of an already existing AI available at Anthropic internally[1]. Maybe also corroborated in his mind with some knowledge about capabilities of a reasoning model based on GPT-4.5, which is almost certainly available internally at OpenAI.
Probably a reasoning model based on a larger pretrained model than Sonnet 3.7. He recently announced in another interview that a model larger than Sonnet 3.7 is due to come out in “relatively small number of time units” (at 12:35). So probably the plan is to release in a few weeks, but something could go wrong and then it’ll take longer. Possibly long reasoning won’t be there immediately if there isn’t enough compute to run it, and the 3-6 months figure refers to when he expects enough inference compute for long reasoning to be released.
I appreciate the question you’re asking, to be clear! I’m less familiar with Anthropic’s funding / Dario’s comments, but I don’t think the magnitudes of ask-vs-realizable-value are as far off for OpenAI as your comment suggests?
E.g., compare OpenAI’s most recent reported raise, at a $157B valuation, vs. what its maximum profit cap likely was in the old (still current, afaik) structure.
The comparison gets a little confusing, because it’s been reported that this investment was contingent on for-profit conversion, which does away with the profit cap.
But I definitely don’t think OpenAI’s recent valuation and the prior profit-cap would be magnitudes apart.
(To be clear, I don’t know the specific cap value, but you can estimate it—for instance by analyzing MSFT’s initial funding amount, which is reported to have a 100x capped-profit return, and then adjusting for what % of the company you think MSFT got.)
(This also makes sense to me for a company in a very competitive industry, with high regulatory risk, and where companies are reported to still be burning lots and lots of cash.)
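The cap estimate sketched above can be written out numerically. Every input below is an illustrative assumption (the actual MSFT funding amount and profit-share percentage are not public; the 100x multiple is the reported figure), so this shows only the shape of the calculation, not a real estimate:

```python
# Back-of-envelope: implied total profit cap vs. OpenAI's reported $157B raise.
# All inputs marked "assumed" are illustrative placeholders, not confirmed figures.

msft_investment = 10e9      # assumed MSFT funding amount in USD (illustrative)
cap_multiple = 100          # reported 100x capped-profit return on that funding
msft_profit_share = 0.49    # assumed fraction of profits MSFT is entitled to (illustrative)

# Max MSFT can ever receive under the cap:
msft_capped_return = msft_investment * cap_multiple

# If MSFT's share is capped at that amount, scale up to imply a cap across all holders:
implied_total_cap = msft_capped_return / msft_profit_share

recent_valuation = 157e9
print(f"Implied total profit cap: ${implied_total_cap / 1e9:.0f}B")
print(f"Cap / recent valuation:   {implied_total_cap / recent_valuation:.1f}x")
```

With these placeholder numbers the implied cap comes out roughly an order of magnitude above the $157B valuation, not "magnitudes apart"—but the output swings directly with whatever investment size and profit-share you assume.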