That does raise my eyebrows a bit, but also, note that we currently have hundreds of top-level researchers at AGI labs tirelessly working day in and day out, and that all that activity results in a… fairly leisurely pace of progress, actually.[1]
Recall that what they’re doing there is blind, atheoretical empirical tinkering (tons of parallel experiments, most of which are dead ends or eke out only a scant few bits of useful information). If you take that research paradigm and ramp it up to superhuman levels (without changing the fundamental nature of the work), maybe it really would take this many researcher-years.
And if AI R&D automation is actually achieved on the back of sleepwalking LLMs, that scenario does seem plausible. These superhuman AI researchers wouldn’t actually be generally superhuman researchers, just superhuman at all the tasks in the blind-empirical-tinkering research paradigm, which has steeply diminishing returns on additional intelligence.
That said, yeah, if LLMs actually scale to a “lucid” AGI, capable of pivoting to paradigms with better capability returns on intelligent work invested, I expect it to take dramatically less time.
It’s fast if you use past AI progress as the reference class, but it is decidedly not fast if you try to estimate “absolute” progress. Like, that isn’t happening: we’ve jumped to near human-baseline and slowed to a crawl at this level. If we assume the human level is the ground and we’re trying to reach the Sun, it might in fact take millennia at this pace.
we’ve jumped to near human-baseline and slowed to a crawl at this level
A possible reason for that might be the fallibility of our benchmarks: for sufficiently complex tasks, it may be hard for humans to see farther than the end of their own noses.