yo, totally!
sorry, i didn’t mean my comment to reject the conclusion of your post. obviously we can argue agi on its own merits—the puppy is not a valid analogy for exactly the reason you specify.
however—speaking narrowly about the quoted passage—i find this move very suspicious:
the only way for B to happen is for A to happen first
we can see that B will happen
therefore A will happen first.
this is valid, insofar as we accept the premises. but it seems disingenuous to me. any plausible narrative we have for B happening has to route first through A happening. we can interpret reasoning-under-uncertainty as a kind of “path counting” game—we are counting “potential futures” according to some measure. but any path through B must necessarily pass through A, by assumption! so any story that we tell about why B will happen is implicitly a story where A happens.
so we can’t count evidence for B as separate evidence for A. any probability we assign to B already has A baked in as an assumption.
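the “path counting” picture can be made concrete. in this toy sketch (all the weights are invented, and A/B stand in for “anthropic develops agi” / “anthropic controls the world economy”), any weight placed on B is automatically weight placed on A:

```python
# toy "path counting" over potential futures: each future records whether
# A and B happened, with a made-up probability weight. by assumption
# there is no future with B but not A.
futures = [
    {"A": True,  "B": True,  "p": 0.10},  # A happens, then B follows
    {"A": True,  "B": False, "p": 0.15},  # A happens but B never does
    {"A": False, "B": False, "p": 0.75},  # neither happens
]

p_A = sum(f["p"] for f in futures if f["A"])
p_B = sum(f["p"] for f in futures if f["B"])

# every path through B passes through A, so P(B) can never exceed P(A)
assert p_B <= p_A
print(f"P(A) = {p_A:.2f}, P(B) = {p_B:.2f}")
```

whatever measure you put over futures, the mass on B is a subset of the mass on A—so quoting P(B) as evidence for A adds nothing beyond P(A) itself.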
if i say[1] “agi is 20 years away”, and you reply “it’s only three years away: look at how close anthropic is to [developing agi and] controlling the world economy”—this is not going to be convincing to me, right? we have reached different odds about how likely agi is in the next three years. and so we will also reach different odds about how likely it is that anthropic controls the world economy in that time frame.
any evidence you have that anthropic will control the world economy must also be evidence that they will develop agi. there’s just no world in which the former happens but not the latter. so just say that evidence, then![2]
[1] to be clear, not my true beliefs.
ps: note that we can play the same game with more mundane technologies:
uber’s revenue is growing X% per year
therefore, their revenue will be Y within Z years.
in order for their revenue to be Y, half the world’s population must be driving uber.
therefore, within Z years, half the world’s population will be driving uber.
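with invented numbers, the extrapolation goes like this (starting revenue, growth rate, and revenue-per-driver are all made up for illustration; the point is only that compounding an assumed growth rate forward forces the absurd conclusion):

```python
# naive extrapolation with invented figures: revenue after z years of
# growth at rate g is r0 * (1 + g) ** z. the absurdity comes from
# accepting the premise that the growth rate never changes.
r0 = 10e9                    # hypothetical current revenue: $10B/year
g = 1.0                      # hypothetical growth: +100% per year
dollars_per_driver = 50_000  # hypothetical revenue per driver per year
world_pop = 8e9

for z in range(1, 30):
    revenue = r0 * (1 + g) ** z
    drivers_implied = revenue / dollars_per_driver
    if drivers_implied >= world_pop / 2:
        print(f"year {z}: implied {drivers_implied:.2e} drivers, "
              f"more than half the world's population")
        break
```

the deduction from “revenue will be Y” to “half the world drives uber” is valid; what’s wrong is treating the extrapolated Y as independent evidence for the conclusion it already assumes.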
The argument is not really about “AGI happening”. It is about the speed of improvement, of which Anthropic’s revenue growth is a measure. What is circular is not the argument; it is the definition of AGI. If you taboo “AGI”, you are left with “at current revenue growth, Claude will take over huge parts of the economy in the next couple of years”. Which is really all Thomas was saying.
There is not really any problem with the structure of the argument, just with the term AGI.
this is the edit i am requesting, yes.