Both OpenAI’s and Anthropic’s revenues have increased massively in one year: roughly 3½-fold for OpenAI and 9-fold for Anthropic.
Their product is in demand, but they lose money on each customer, so they take in a lot of money to grow their customer base and lose even more money in the process.
They need to transition to making money. To do so they need something like network effects (social media, Uber/Lyft to some extent), returns to scale, or some massive first mover advantage. I don’t see that yet.
As you say, one area where they are already starting to be genuinely useful is some of the more routine forms of coding. A leading indicator I think you should be looking at: according to Google, they’ve recently reached the point where “50% of code by character count was generated by LLMs”.
That’s less than I was expecting. And my personal experience of coding with LLMs (and of speaking with others who do) is that it takes a lot of work to get something that functions: the LLM will write most of the code, but it’s often a long process from there to a working program, a much longer one to a working, interpretable program, and longer still to a working program that fits well into a codebase.
When you code with LLMs, it feels like you’re really productive, because you’re always doing stuff—but often it actually slows you down. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
Now, I feel that the coding models are better than they were at the time of that study, especially for routine tasks.
So my median expectation is that moving 50% of code-writing to LLMs might increase Google’s productivity by 10%. But 25% or −5% are also possible.
In general, something growing via an exponential or logistic-curve process looks small until shortly before it isn’t — and that’s even more true when it’s competing with an established alternative.
Shipping finished code is a process involving a lot of steps, only some of which are automated. So (Amdahl’s Law) the time to ship finished code will be determined by the parts of the process that aren’t easily automated. If the time to write code falls to zero but the time to review code stays the same or even increases, we’ll only get a mild speedup.
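To make that arithmetic concrete, here’s a minimal sketch of the Amdahl’s-Law argument; the 30% “writing share” and the per-step speedups are illustrative assumptions, not measurements of anyone’s actual pipeline:

```python
# Amdahl's Law sketch: overall speedup when only the code-writing step
# gets faster. All fractions below are illustrative assumptions.

def amdahl_speedup(automatable_fraction: float, step_speedup: float) -> float:
    """Overall speedup when `automatable_fraction` of the total time is
    sped up by `step_speedup` and the rest of the process is unchanged."""
    return 1.0 / ((1.0 - automatable_fraction) + automatable_fraction / step_speedup)

# Assume writing code is 30% of the time to ship a change; the rest is
# design, review, testing, integration, deployment.
writing_share = 0.30

# Even if LLMs made code-writing effectively instantaneous...
print(amdahl_speedup(writing_share, step_speedup=1e9))   # ~1.43x ceiling

# A more modest 2x speedup on the writing step alone:
print(amdahl_speedup(writing_share, step_speedup=2.0))   # ~1.18x overall
```

Under those made-up numbers, “most of the code is written by an LLM” translates into a much smaller gain in end-to-end productivity, which is why a ~10% figure doesn’t strike me as pessimistic.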
The other problem is that a logistic curve close to its inflection point, a logistic curve well before its inflection point, and a true exponential all look the same (see our paper https://arxiv.org/abs/2109.08065 ). OK, we might be on the verge of great LLM-based improvements, but these have been promised for a few years now. And (this is entirely my personal feeling) they feel further away now than they did in the GPT-3.5 era.
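On the curve-fitting point, here’s a toy numerical illustration; the growth rate and carrying capacity are made-up parameters, not fitted to any real data:

```python
# Toy illustration: far below its carrying capacity, a logistic curve is
# numerically almost indistinguishable from an exponential with the same
# initial value and growth rate. Parameters here are made up.
import math

def exponential(t: float, x0: float = 1.0, r: float = 0.5) -> float:
    return x0 * math.exp(r * t)

def logistic(t: float, x0: float = 1.0, r: float = 0.5, K: float = 1000.0) -> float:
    # Standard solution of the logistic equation with carrying capacity K.
    return K / (1.0 + (K / x0 - 1.0) * math.exp(-r * t))

for t in range(0, 13, 2):
    exp_val, log_val = exponential(t), logistic(t)
    gap = (exp_val - log_val) / exp_val
    print(f"t={t:2d}  exp={exp_val:8.2f}  logistic={log_val:8.2f}  rel. gap={gap:6.1%}")
# The gap stays small until the logistic is already a sizable fraction of
# its ceiling K -- by which point the distinction is no longer useful as
# a forecast.
```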
In simple economic terms: Tesla aside, the other six of the “magnificent seven” have not (so far) reached the Price/Earnings levels characteristic of bubbles just before they burst; their ratios look more typical of historically fast-growing companies.
The magnificent seven have strong non-AI income streams, so I expect them to survive a bubble bursting. If OpenAI were publicly traded, its P/E ratio would be… interesting. Well, actually, it would be quite boring, because it would be negative.
That scenario is not impossible. If we aren’t in a bubble, I’d expect something like that to happen.
It’s still premised on the idea that more training/inference/resources will result in qualitative improvements.
We’ve seen model after model get better and better, without any of them overcoming the fundamental limitations of the genre: they still break when out of distribution (this is hidden in part by their extensive training, which puts more material in distribution without solving the underlying issue).
So your scenario is possible; I had similar expectations a few years ago. But I’m seeing more and more evidence against it, so I’m giving it a lower probability (maybe 20%).