The LLM adoption wave is underway, which means current AI companies are receiving valuations at very high multiples relative to ARR, and correspondingly large investment rounds. Current ARR only matters as a lower bound (with some non-instant deceleration priced in) and as an ingredient in models of where revenue is going. Spending will mostly follow what can be raised, even when that’s out of touch with current ARR, so comparing the two is of little use. Once growth slows, spending will adjust.
The crucial question is when and where growth will slow down, but this is extremely uncertain, especially because the level of LLM capabilities attainable in the short term (and therefore the TAM) remains extremely uncertain. The relevant indicators here are ARR growth, valuation-to-ARR ratios, and TAM estimates. Currently the TAM estimates seem closest to indicating the time to a possible slowdown, if capabilities don’t significantly improve. For example, it would be hard to find much more than $100bn in revenue from programming or legal assistants if they are not truly autonomous (say $20K per year for 5M professionals). So in 2026-2028 these kinds of markets should start to visibly saturate, at least as far as Anthropic-level 10x YoY ARR growth is concerned, if capabilities don’t sufficiently improve.
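A minimal sketch of that per-seat arithmetic, using the illustrative figures from the example above ($20K/year seats, 5M professionals); the starting ARR used for the saturation timing is a hypothetical placeholder, not a reported figure:

```python
import math

# Back-of-the-envelope TAM for non-autonomous assistant markets (illustrative assumptions).
def seat_tam(annual_price_per_seat: float, addressable_professionals: float) -> float:
    """Ceiling on annual revenue if every addressable professional buys one seat."""
    return annual_price_per_seat * addressable_professionals

# Figures from the example in the text: $20K/year seats, 5M professionals.
programming_and_legal_tam = seat_tam(20_000, 5_000_000)
print(f"~${programming_and_legal_tam / 1e9:.0f}bn/year ceiling")  # ~$100bn/year

# Years of Anthropic-level 10x YoY growth before ARR hits that ceiling,
# starting from a hypothetical $4bn ARR (placeholder, not a reported number).
starting_arr = 4e9
years_to_saturation = math.log(programming_and_legal_tam / starting_arr, 10)
print(f"~{years_to_saturation:.1f} years of 10x growth to saturation")  # ~1.4 years
```

Under these assumptions the ceiling is reached within a year or two, which is what puts the possible slowdown in the 2026-2028 window.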
OpenAI can probably achieve Meta/Google-style revenue just from monetizing free users, since they’re already one of the biggest platforms in the world, with a clear path to increasing eyeballs through model progress, new modalities and use cases, and building up an app ecosystem (e.g. their widely rumored web browser). An anonymous OpenAI investor explains the basic logic:
The investor argues that the math for investing at the $500 billion valuation is straightforward: Hypothetically, if ChatGPT hits 2 billion users and monetizes at $5 per user per month—“half the rate of things like Google or Facebook”—that’s $120 billion in annual revenue.
However, this might take a long time to fully realize, perhaps like 10 years?
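The same kind of check on the investor’s numbers; the 2 billion users and $5/month rate are the hypotheticals from the quote, not reported figures:

```python
# Free-user monetization arithmetic from the investor quote (hypothetical inputs).
def ad_style_revenue(monthly_active_users: float, arpu_per_month: float) -> float:
    """Annual revenue from monetizing free users at a given monthly ARPU."""
    return monthly_active_users * arpu_per_month * 12

# 2bn users at $5/user/month ("half the rate of things like Google or Facebook").
print(f"${ad_style_revenue(2e9, 5) / 1e9:.0f}bn/year")  # $120bn/year
```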
OpenAI is a special case: they’ve captured the casual/free userbase. But my point is that ~$100bn per year is the current softcap (if capabilities don’t significantly improve); about this much is feasible from either monetization of free users or AI assistants for highly paid knowledge workers, and probably all the LLM-enabled apps (built on foundation model company APIs) also add up to something. Right now, it’s not even clear whether Anthropic or OpenAI will stumble first: OpenAI’s API business is weaker, and their free user base is already nearing saturation, so it can’t keep growing very quickly for much longer.
An interesting question is whether $250-500bn per year is feasible for one company within the current adoption wave, because that would enable 10-20 GW training systems. There’s a lot OpenAI might be able to do to monetize their users, and they might also become competitive through the API. On-site casual users won’t leave at the drop of a hat even if OpenAI’s products are slightly worse than competitors’, while API users will readily switch to OpenAI if its products become better. So apart from Google, OpenAI seems best positioned to become able to build the largest training systems.
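As a rough consistency check on why that revenue range maps to that training-system scale, here is the ratio implied by the numbers above ($250-500bn/year ↔ 10-20 GW, i.e. roughly $25bn of annual revenue per GW); the ratio is inferred from those figures, not an independently sourced cost estimate:

```python
# Implied relation between annual revenue and buildable training-system scale.
# Inferred from the $250-500bn/year <-> 10-20 GW pairing above, not a sourced cost model.
IMPLIED_REVENUE_PER_GW = 25e9  # $/year of revenue per GW of training system

def buildable_training_gw(annual_revenue: float) -> float:
    """Training-system power a revenue stream could plausibly support, under the implied ratio."""
    return annual_revenue / IMPLIED_REVENUE_PER_GW

# The ~$100bn softcap vs. the two ends of the $250-500bn range discussed above.
for revenue in (100e9, 250e9, 500e9):
    print(f"${revenue / 1e9:.0f}bn/year -> ~{buildable_training_gw(revenue):.0f} GW")
```

On this implied ratio, the ~$100bn softcap only supports training systems of a few GW, which is why getting to $250-500bn per year matters for who can build the largest ones.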