The revenue of the leading AI company will be between 100B/yr and 10T/yr when AGI is achieved. (Why not lower? Maybe, but AGI this year seems unlikely. Why not higher? If one company's revenue is on the order of 10% of current wGDP, then the whole AI industry is probably 50-100% of current wGDP, by which point you probably already have AGI).
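(For scale, a rough sanity check on the upper bound: the passage seems to assume world GDP on the order of $100T/yr and a leading-company share of total AI revenue of roughly 10-20%; under those assumed figures,)

$$
\frac{10\text{T/yr}}{100\text{T/yr}} \approx 10\% \text{ of wGDP},
\qquad
\frac{10\text{T/yr}}{0.1\text{ to }0.2} \approx 50\text{--}100\text{T/yr} \approx 50\text{--}100\% \text{ of wGDP}.
$$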
am i understanding correctly?
anthropic is growing by 10x per year
on this trend, they will soon have 10T/yr revenues
in order to have 10T/yr revenues, they will need to achieve agi
therefore, they will achieve agi.
this seems rather circular?
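(for concreteness, a minimal sketch of the extrapolation in the second step, using a made-up starting revenue of $5B/yr purely for illustration; the exact figure barely moves the timescale)

```python
import math

# made-up illustrative numbers, not reported figures
start_revenue = 5e9      # $5B/yr, assumed starting point
growth_per_year = 10     # "10x per year"
target = 10e12           # $10T/yr

# years of sustained 10x growth needed to reach the target
years = math.log(target / start_revenue, growth_per_year)
print(f"~{years:.1f} years of 10x/yr growth to reach $10T/yr")
# under these assumptions: ~3.3 years
```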
my puppy doubled in size over the past few weeks
on this trend, he will become larger than even clifford—known large red dog
in order to become larger than clifford, he will have to be some kind of mutant super-puppy
therefore he is a mutant super-puppy
With AI, there’s an obvious case for it being able to automate the whole economy (humans do everything in the economy, AI could in principle do everything that humans can do). Whereas the reference class of existing puppies strongly suggests that the puppy will stop growing.
I think correct counterarguments need to somehow dispute one of the premises—and it sounds like you are disputing (1). But I feel like you need some reasons to expect that (1) will be false. There are some (e.g. Daniel's response above), and also reversion to the growth trend of the AI industry as a whole.
yo, totally!
sorry, i didn’t mean my comment to reject the conclusion of your post. obviously we can argue agi on its own merits—the puppy is not a valid analogy for exactly the reason you specify.
however—speaking narrowly about the quoted passage—i find this move very suspicious:
the only way for B to happen is for A to happen first
we can see that B will happen
therefore A will happen first.
this is valid, insofar as we accept the premises. but it seems disingenuous to me. any plausible narrative we have for B happening has to route first through A happening. we can interpret reasoning-under-uncertainty as a kind of “path counting” game—we are counting “potential futures” according to some measure. but any path through B must necessarily pass through A, by assumption! so any story that we tell about why B will happen is implicitly a story where A happens.
so we can’t count evidence for B as separate evidence for A. any probability we assign to B already has A baked in as an assumption.
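(one minimal way to make this precise: if B can only happen via A, then every B-world is an A-world, so)

$$
P(B) = P(A \wedge B) \le P(A), \qquad P(A \mid E) \ge P(B \mid E) \text{ for any evidence } E.
$$

so whatever pushes your credence in B up has already pushed your credence in A up at least as far; quoting B back as support for A adds nothing new.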
if i say[1] “agi is 20 years away”, and you reply “it’s only three years away: look at how close anthropic is to [developing agi and] controlling the world economy”—this is not going to be convincing to me, right? we have reached different odds about how likely agi is in the next three years. and so we will also reach different odds about how likely it is that anthropic controls the world economy in that time frame.
any evidence you have that anthropic will control the world economy must also be evidence that they will develop agi. there’s just no world in which the former but not the latter. so just say that evidence, then![2]
[1] to be clear, not my true beliefs.
ps: note that we can play the same game with more mundane technologies:
uber’s revenue is growing X% per year
therefore, their revenue will be Y within Z years.
in order for their revenue to be Y, half the world’s population must be driving uber.
therefore, within Z years, half the world’s population will be driving uber.
The argument is not really about “AGI happening”. It is about the speed of improvement, of which Anthropic’s revenue growth is a measure. What is circular is not the argument; it is the definition of AGI. If you taboo “AGI” you are left with “at current revenue growth Claude will take over huge parts of the economy in the next couple of years”. Which is really all Thomas was saying.
There is not really any problem with the structure of the argument, just with the term AGI.
this is the edit i am requesting, yes.