There’s a kind of paradox in all of these “straight line” extrapolation arguments for AI progress that your timelines assume (e.g., the argument for superhuman coding agents based on the rate of progress in the METR report).
One could extrapolate many different straight lines from current graphs (GDP, scientific progress, energy consumption, etc.). If we do create transformative AI within the next few years, then all of those straight lines will suddenly hit an inflection point. So, to believe the straight-line extrapolation of the AI line, you must also believe that almost no other straight line will stay that way.
This seems to be the gut-level disagreement between those who feel the AGI and those who don’t; the disbelievers don’t buy that the AI line will stay straight while all the others bend.
I don’t know who’s right and who’s wrong in this debate, but the method of reasoning here reminds me of the viral tweet: “My 3-month-old son is now TWICE as big as when he was born. He’s on track to weigh 7.5 trillion pounds by age 10.” It could be true, but I have a fairly strong prior from nearly every other context that growth/progress tends to bend into an S-curve at one point or another, so these forecasts seem deeply suspect to me unless there’s some better reason to expect that trends will continue along the same path.
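For what it’s worth, the tweet’s arithmetic roughly checks out. Here’s a minimal sketch of the naive doubling extrapolation (the ~7 lb birth weight is my assumption; the tweet doesn’t state it):

```python
# Doubling every 3 months for 10 years is 40 doublings,
# i.e. a growth factor of 2**40 ~= 1.1e12.
birth_weight_lbs = 7.0           # assumed starting weight
doublings = 10 * 12 // 3         # 40 three-month periods in 10 years
extrapolated = birth_weight_lbs * 2**doublings
print(f"{extrapolated:.2e} lbs")  # ~7.7e12, i.e. trillions of pounds
```

The extrapolation is internally consistent; it’s the assumption that the doubling rate holds for 40 periods that does all the absurd work.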
There is no infinite growth in nature; everything hits a ceiling at some point. So I agree that the intelligence explosion will eventually take a sigmoid shape as it approaches the physical limits. However, I think the physical limits are far off. While we will get diminishing returns from each individual technology, we will also shift to a new technology each time. It might slow down once the Earth has been transformed into a supercomputer, since interplanetary communication naturally limits processing speed. But my guess is that this will happen long after the scenario described here.
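To illustrate the “shift to a new technology each time” point, here’s a hedged sketch (all ceilings, rates, and midpoints are made-up parameters) of how a sequence of individually saturating S-curves can sum to something that keeps looking like continued growth for a long while:

```python
import numpy as np

def logistic(t, ceiling, rate, midpoint):
    """One technology's S-curve: near-exponential at first, then saturating."""
    return ceiling / (1 + np.exp(-rate * (t - midpoint)))

t = np.linspace(0, 30, 301)
# Three hypothetical successive technologies, each with a 10x higher ceiling:
waves = [logistic(t, ceiling=10**k, rate=1.0, midpoint=5 + 10 * (k - 1))
         for k in (1, 2, 3)]
total = np.sum(waves, axis=0)
# Each wave flattens out individually, but the sum keeps climbing as the
# next wave takes over -- diminishing returns per technology, without an
# overall slowdown until the last ceiling is reached.
```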