Here’s an argument for short timelines that I take seriously:
1. Anthropic’s revenue has increased by 10x/yr for the last three years. At EOY 2025 it was $10B. Maybe it will keep increasing at this rate.
2. The revenue of the leading AI company will be between $100B/yr and $10T/yr when AGI is achieved. (Why not lower? Maybe, but AGI this year seems unlikely. Why not higher? If one company’s revenue is on the order of 10% of current wGDP, then the whole AI industry is probably 50-100% of current wGDP, at which point you probably already have AGI.)
3. Therefore, AGI will be built sometime between EOY 2026 (when Anthropic hits $100B on current trends) and EOY 2028 (when Anthropic hits $10T on current trends); see the sketch just below.
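A minimal sketch of that extrapolation (the only inputs are the ~$10B/yr EOY 2025 figure and the 10x/yr multiplier assumed in (1); nothing else is implied):

```python
# Minimal extrapolation of premise (1): ~$10B/yr at EOY 2025, times 10 each year.
revenue = 10e9
for year in (2026, 2027, 2028):
    revenue *= 10
    print(f"EOY {year}: ~${revenue / 1e9:,.0f}B/yr")
# EOY 2026: ~$100B/yr; EOY 2027: ~$1,000B/yr; EOY 2028: ~$10,000B/yr (= $10T/yr),
# i.e. the $100B-$10T window in premise (2) is crossed between EOY 2026 and EOY 2028.
```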
I think I feel better about (2) than basically any other way of getting an anchor on when AGI will be built, because it much more directly tracks real-world impacts of AI, whereas e.g. it seems really difficult to get any sort of confidence about what OOM of effective FLOPs or what benchmark score corresponds to AGI.
(1) still seems dubious to me; I think revenue trends will probably slow. But I don’t know when, and I could totally imagine them continuing straight to AGI.
(What exactly do I mean by AGI? I don’t think it matters much here; the argument goes through pretty well for all reasonable definitions of AGI. But let’s say I mean that AI R&D would slow down more if you removed the AIs than if you removed the humans.)
Note that in order for Anthropic’s revenue to 10x this year, they’ll already have to increase $/FLOP (i.e. revenue per unit of compute; profit margins, basically). To increase it another 10x the following year, they’ll probably need to triple $/FLOP, because their compute will only roughly triple next year. Ditto for 2028. All this is a reason to doubt premise (1), basically: in the past they’ve been able to grow revenue in large part by just allocating more of their compute to serving customers, but now they’ll have to charge customers more per FLOP.
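To make that arithmetic explicit, here is a minimal sketch under the comment’s own assumptions (10x/yr revenue from premise (1), only ~3x/yr compute; both multipliers are illustrative, not reported figures):

```python
# If revenue must grow 10x/yr while compute grows only ~3x/yr, then $/FLOP
# (revenue per unit of compute) has to make up the gap each year.
revenue_growth = 10.0  # assumed yearly revenue multiplier, from premise (1)
compute_growth = 3.0   # assumed yearly compute multiplier, from the comment above

dollars_per_flop_growth = revenue_growth / compute_growth
print(f"implied $/FLOP growth: ~{dollars_per_flop_growth:.1f}x per year")
print(f"over 2027 and 2028 combined: ~{dollars_per_flop_growth ** 2:.0f}x")
```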
According to their Twitter, Anthropic’s revenue grew 3x in the first three months of 2026, which this comment ~implies would be unlikely.
Well, I certainly was surprised! So, guilty as charged I guess? I still stand by the comment. What parts of it do you disagree with?
(Hoagy probably won’t comment, due to being an Anthropic employee.)
I wonder if two other factors might also work against 10x growth per year as a norm. One is the standard S-curve growth pattern for start-up enterprises. The length of the high-growth phase could be interesting here: does the rapid advancement of AI itself point to a longer or shorter period of rapid revenue growth? My guess is it might shorten it, but I haven’t really thought about that before.
The other is just opportunity costs. All that money is coming from somewhere, not thin air, and we have some real constraints on monetary velocity as well. So just how much of the money supply can AI revenue growth curves eat up?
Update: Semianalysis claims:
am i understanding correctly?
anthropic is growing by 10x per year
on this trend, they will soon have 10T/yr revenues
in order to have 10T/yr revenues, they will need to achieve agi
therefore, they will achieve agi.
this seems rather circular?
my puppy doubled in size over the past few weeks
on this trend, he will become larger than even clifford—known large red dog
in order to become larger than clifford, he will have to be some kind of mutant super-puppy
therefore he is a mutant super-puppy
With AI, there’s an obvious case for it being able to automate the whole economy (humans do everything in the economy, AI could in principle do everything that humans can do). Whereas the reference class of existing puppies strongly suggests that the puppy will stop growing.
I think correct counterarguments need to somehow dispute one of the premises—and it sounds like you are disputing (1). But I feel like you need some reasons to expect that (1) will be false. There are some (e.g. Daniel’s response above, and also reversion to the AI industry’s overall growth trend).
yo, totally!
sorry, i didn’t mean my comment to reject the conclusion of your post. obviously we can argue agi on its own merits—the puppy is not a valid analogy for exactly the reason you specify.
however—speaking narrowly about the quoted passage—i find this move very suspicious:
the only way for B to happen is for A to happen first
we can see that B will happen
therefore A will happen first.
this is valid, insofar as we accept the premises. but it seems disingenuous to me. any plausible narrative we have for B happening has to route first through A happening. we can interpret reasoning-under-uncertainty as a kind of “path counting” game—we are counting “potential futures” according to some measure. but any path through B must necessarily pass through A, by assumption! so any story that we tell about why B will happen is implicitly a story where A happens.
so we can’t count evidence for B as separate evidence for A. any probability we assign to B already has A baked in as an assumption.
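in probability terms (a sketch, writing A = “anthropic builds agi” and B = “anthropic reaches ~$10T/yr revenue”, with the premise that B can only happen via A):

$$B \subseteq A \;\Rightarrow\; P(B) = P(B \mid A)\,P(A) \le P(A)$$

so whatever probability we assign to B already contains P(A) as a factor, and citing B as evidence for A just hands back whatever we already believed about A.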
if i say[1] “agi is 20 years away”, and you reply “it’s only three years away: look at how close anthropic is to [developing agi and] controlling the world economy”—this is not going to be convincing to me, right? we have reached different odds about how likely agi is in the next three years. and so we will also reach different odds about how likely it is that anthropic controls the world economy in that time frame.
any evidence you have that anthropic will control the world economy must also be evidence that they will develop agi. there’s just no world in which the former but not the latter. so just say that evidence, then![2]
[1] to be clear, not my true beliefs.
ps: note that we can play the same game with more mundane technologies:
uber’s revenue is growing X% per year
therefore, their revenue will be Y within Z years.
in order for their revenue to be Y, half the world’s population must be driving uber.
therefore, within Z years, half the world’s population will be driving uber.
The argument is not really about “AGI happening”. It is about the speed of improvement, of which Anthropic’s revenue growth is a measure. What is circular is not the argument, it is the definition of AGI. If you taboo “AGI” you are left with “at current revenue growth Claude will take over huge parts of the economy in the next couple of years”. Which is really all Thomas was saying.
There is not really any problem with the structure of the argument, just with the term AGI.
this is the edit i am requesting, yes.
I also take this argument seriously.
One background fact some commenters are missing: it’s virtually unheard of for a tech startup to continue growing at 100% or more after it reaches $1 billion per year in revenue. A company growing at closer to 1000% per year at the multi-billion revenue level is wildly unprecedented. A company tripling its revenue in one quarter from a starting point of $10 billion, as Anthropic did in Q1, is even more wildly unprecedented than that.
Revenue growth has momentum, and it is essentially locked in that frontier LLMs will be a bigger business than the biggest tech industries (smartphones, internet advertising) are today.
These events are rare, but not unheard of. Zoom was doubling quarterly in 2021 for a short while at over a $1B run-rate. Moderna 2.5x’d in one quarter from a $7B run-rate in 2021. (Both these cases show how fast revenue growth rates can collapse, albeit for different reasons—but note the common case of a shock driving revenue rapidly up).
FWIW, Nvidia continues to double yearly after hitting a $100B run-rate.
I think those examples actually reinforce Josh’s point. NVIDIA’s growth is also from AI. Zoom and Moderna grew because COVID created a ginormous demand shock, yet even then, their growth at its peak was slower than Anthropic’s, despite their starting from a smaller revenue level, which made it inherently easier for them to grow. So… unless you can dig up better examples, it seems like Anthropic’s Q1 2026 growth is literally the most impressive growth of any company in history? And this is despite the fact that there’s no COVID-equivalent for AI; there’s no unusual circumstance that created a huge temporary demand shock. Instead, it’s just that they made their products better.
Notably, this argument also predicts Anthropic will have a strong lead over their competitors by EOY 2027 ($1 trillion in revenue vs a projected $250 billion for OpenAI, see here) and a decisive lead by EOY 2028 ($10 trillion for Anthropic vs $800 billion for OpenAI).
It also predicts that there would be huge economic returns to OpenAI selling their compute to Anthropic if this revenue growth happened while the compute growth trend of the respective companies matched projections.
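As a rough consistency check on those figures (a minimal sketch; the Anthropic numbers are the 10x/yr extrapolation from the post and the OpenAI numbers are the linked projections, so none of these are reported figures):

```python
# Revenue figures (in $/yr) from the comment above: Anthropic on the 10x/yr trend,
# OpenAI from the linked projection.
anthropic = {2027: 1_000e9, 2028: 10_000e9}
openai = {2027: 250e9, 2028: 800e9}

for year in anthropic:
    ratio = anthropic[year] / openai[year]
    print(f"EOY {year}: implied Anthropic/OpenAI revenue ratio ~{ratio:.0f}x")
```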
I see a few posts like this anchoring AGI timelines to company revenue / GDP, most notably from economists. But I’d like to understand where this intuition comes from. It seems to me similar to the biological anchors, or back in the day the Kurzweilian anchor to FLOP/s.
GDP anchors aren’t any more intuitive to me for AGI/ASI than parameter-count or FLOP/s intuitions. Like, I can totally imagine AI companies having revenue of ~10% of GDP ($10T) without AGI, even with current-level AIs proliferating over the next 10 years.
Speaking for myself:
First of all, the implication “AGI --> Loads of revenue” does seem to hold. If one of these companies did get to AGI, they’d pretty quickly get to 1T, then 10T, then 100T ARR.
What about the implication “Loads of revenue --> AGI?” That’s trickier. But the basic intuition is that in order for a company like Anthropic to be making $10T ARR, they must be deploying Claude pretty much across the whole economy. Claude must be embedded in basically everything, and providing a lot of value too, otherwise people wouldn’t be paying 10% of world GDP for it. And it seems like a Claude capable enough to provide that much value to that many diverse industries would probably be AGI. If there was still some major skill/ability that it lacked, some major way in which humans were superior, then probably that deficiency would prevent it from making $10T ARR, by limiting it to industries or roles that don’t require that skill/ability.
The obvious potential limitation to me is robotics/skilled manual labor. Maybe I’m just misunderstanding something fundamental here, but it seems at least plausible to me that there will be significant fractions of skilled manual labor that are not automated at the point where Claude is 10% of world GDP (and AI in general is 20%+).
AGI → loads of revenue path makes sense to me
But I can imagine an AI that is at the 99.9th percentile across some disciplines but not all. I’d assume we already spend ~$10T on things like engineering talent, medical advice, legal services, etc., and it seems like AI companies could make that much (given they can capture a lot of the excess value—assume there’s only a single AI lab and no competition, if you will). I can imagine something only slightly better than today’s AIs having that level of revenue after proliferating through the economy for another decade.
Even if it’s deficient in a bunch of other things we are good at (writing, comedy, physical labor, making better AIs, etc.), it seems to me you can get very far with just a subset of human skills rather than all of them.
I agree actually, that maybe AIs not too different from today’s could get to $10T after proliferating into the economy for another decade.
So perhaps Thomas’ argument should be revised to more specifically be about the next two years or so. If Anthropic or OAI make it to $10T by 2029, then that seems like something that couldn’t be achieved with just slightly better versions of current AIs. There just isn’t enough time to build all the products on top of it, transform all the industries, outcompete the dinosaurs, etc. Whereas if they actually do have a drop-in replacement for human professionals at everything, then yes they’d make it to $10T.
Does this type of logic work for past experience we’ve had with large economic shifts, such as industrial farming or the internet? For example, do tractors count as AGI for people living in the Middle Ages?
My thought as well. Since FLOPs have limits on how fast they can grow, $/FLOP would need to grow quickly. Did $/watt grow very quickly as people found better uses for the energy and built out the complements to support that?
$100B in revenue seems awfully low. For context, Walmart did $700B in revenue last year and Toyota did $330B. Neither company is exactly close to AGI. $100B is like 0.1% of wGDP. It’s a lot, but it’s hard to draw a line from that to AGI. I think $1T is the minimum for this kind of argument, and closer to $10T for this line of reasoning.
I think the Walmart and Toyota case is less interesting because they’re not creating “new” consumption. Walmart has huge revenue because it’s captured a big slice of people’s overall consumption. If Walmart’s revenue doubled next year, it’d probably be because they got a bigger slice, not because people are suddenly buying twice as much stuff.
Continuation of this trend already requires some form of TAI. The way AI systems generate value would have to change radically. Otherwise, who would pay so much money for them?
It’s kinda like making a similar argument about parameter counts and saying “and if it’s more than … parameters, the Earth’s surface is all computronium, so obviously AGI was achieved”.
I already mostly believe in the logical implication “no AGI → break of trends”, so “no break of trends → AGI” is not an additional argument.
There used to be a lot of arguments about AI timelines 5+ years ago of the sort “if AI is coming, why are the markets not reacting?” We’re now on the other side—already being within the time horizon that markets react to—where the markets themselves are pointing in the direction of AGI, and people instead wonder how to discount that (e.g. by saying it is a bubble, or that trends must slow).
The markets aren’t pointing in the direction of transformative AI (long-term bond yields, etc.).
They are pointing in the direction of AI being very significant in the economy.
This reminds me of the argument people make for the existence of life on other planets. “Sure, the chances of life on any given planet may be small, but with such a large number of planets, there’s gotta be life on one of them!”
But if there are 700 quintillion planets, that fact alone tells you nothing. You’d also have to know that the chance of life occurring on any given planet is at least close to one in 700 quintillion, which we don’t in fact know and have no good way of estimating.
I feel your argument has a similar shape. “If we’re spending that much money on AI, then we’ve gotta reach AGI by then!” This is only true, of course, if the difficulty of achieving AGI is below a certain threshold, and we don’t know what that threshold is.
It’s a kind of relative evidence. If there are 700 quintillion planets, that makes it more likely there are aliens than if there were only a few thousand planets. But I’m still clueless as to what the actual probability is, only that it’s higher than it would have been otherwise. Same with AGI.
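To put toy numbers on this (a minimal sketch; the per-planet probabilities below are made up purely for illustration, and only the 700 quintillion figure comes from the comment above):

```python
import math

N = 7e20  # ~700 quintillion planets, the figure from the comment above

def p_life_somewhere(p: float, n: float = N) -> float:
    # P(life on at least one planet) = 1 - (1 - p)^n, computed via log1p/expm1
    # so that tiny per-planet probabilities don't underflow.
    return -math.expm1(n * math.log1p(-p))

for p in (1e-24, 1e-22, 1e-20, 1e-18):  # made-up per-planet probabilities
    print(f"p = {p:.0e}: P(life somewhere) ~ {p_life_somewhere(p):.3f}")
# The answer swings from ~0 to ~1 depending entirely on the unknown p,
# even though a larger N always pushes it upward for any fixed p.
```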