But… the success of LLMs is the only reason people have super short timelines! That’s why we’re all worried about them, and in particular about whether they can soon invent a better paradigm—which, yes, may be more efficient and dangerous than LLMs, but presumably requires them to pass human-researcher level FIRST, maybe by a wide margin.
If you don’t believe LLMs will scale to AGI, I see no compelling reason to expect another, much better paradigm to be discovered in the next 5 or 10 years. Neuroscience is a pretty old field! They haven’t figured out the brain’s core algorithm for intelligence yet, if that’s even a thing. Just because LLMs displayed some intelligent behavior before fizzling (in this hypothetical) doesn’t mean that we’re necessarily one simple insight away. So that’s a big sigh of relief, actually.
One compelling reason to expect it in the next 5 to 10 years, independent of LLMs, is that compute has just recently gotten cheap enough that you can relatively cheaply do training runs that use as much compute as a human uses (roughly speaking) in a lifetime. Right now, doing 3e23 FLOP (perhaps roughly human-lifetime FLOP) costs roughly $200k, and we should expect that in 5 years it will only cost around $30k.
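For anyone who wants to sanity-check those figures, here’s the back-of-envelope version; the throughput and rental price below are assumptions I’m plugging in, not precise numbers:

```python
# Back-of-envelope check of the figures above. The throughput and rental-price
# numbers are my own assumptions, not precise quotes.

LIFETIME_FLOP = 3e23        # rough human-lifetime compute anchor used above
FLOP_PER_SEC = 1e15         # assumed sustained throughput of one rented accelerator
PRICE_PER_HOUR = 2.0        # assumed rental price in dollars per accelerator-hour

hours = LIFETIME_FLOP / FLOP_PER_SEC / 3600          # ~83,000 accelerator-hours
cost_today = hours * PRICE_PER_HOUR                  # ~$167k, in line with "roughly $200k"
print(f"{hours:,.0f} hours -> ${cost_today:,.0f} today")

# The $200k -> $30k claim implies roughly this annual price-performance improvement:
implied_annual_gain = (200_000 / 30_000) ** (1 / 5)  # ~1.46x per year
print(f"implied gain: {implied_annual_gain:.2f}x per year")
```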
So if you thought we might achieve AGI around the point when compute gets cheap enough to do lots of experiments with roughly human-lifetime compute, and training runs at substantially larger scale, that point is now within reach. To put this another way, most of the probability mass of the “lifetime anchor” from the bio anchors report rests in the next 10 years.
More generally, we’ll be scaling through a large number of orders of magnitude of compute (potentially including compute spent on things other than LLMs) and investing much more in AI research.
I don’t think these reasons on their own should get you above ~25% within the next 10 years, but this in combination with LLMs feels substantial to me (especially because a new paradigm could build on LLMs even if LLMs don’t suffice).
Seems plausible, but not compelling.
Why one human lifetime and not somewhere closer to evolutionary time on a log scale?
Presumably you should put some weight on both perspectives, though I put less weight on needing as much compute as evolution because evolution seems insanely inefficient.
That’s why I specified “close on a log scale.” Evolution may be very inefficient, but it also has access to MUCH more data than a single lifetime.
Yes, we should put some weight on both perspectives. What I’m worried about here is this trend where everyone seems to expect AGI in a decade or so even if the current wave of progress fizzles—I think that is a cached belief. We should be prepared to update.
I don’t expect AGI in a decade or so even if the current wave of progress fizzles. I’d put around 20% over the next decade if progress fizzles (it depends on the nature of the fizzle), which is what I was arguing for.
I’m saying we should put some weight on possibilities near lifetime level compute (in log space) and some weight on possibilities near evolution level compute (in log space).
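To make that concrete, here’s a toy version of that mixture; the medians, spreads, weights, and the affordability figure below are illustrative assumptions of mine, not numbers anyone in this thread has committed to:

```python
# A minimal sketch of "some weight near lifetime-level compute, some weight near
# evolution-level compute, in log space." All parameters are illustrative assumptions.

from statistics import NormalDist

# log10 of the training FLOP needed for AGI, modeled as a two-component mixture in log space
lifetime_anchor = NormalDist(mu=24, sigma=2)    # centered near human-lifetime compute
evolution_anchor = NormalDist(mu=41, sigma=3)   # centered near evolution-level compute
w_lifetime, w_evolution = 0.5, 0.5              # assumed mixture weights

def p_agi_within(log10_affordable_flop: float) -> float:
    """P(required compute <= affordable compute) under the toy mixture."""
    return (w_lifetime * lifetime_anchor.cdf(log10_affordable_flop)
            + w_evolution * evolution_anchor.cdf(log10_affordable_flop))

# If ~1e27 FLOP is the largest run affordable this decade (an assumption):
print(f"{p_agi_within(27):.0%}")   # ~47% with these toy parameters; shrinking w_lifetime
                                   # or widening the lifetime spread pulls this toward ~20%
```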
I’m not sure we disagree then.
I suspect this is why many people’s P(Doom) is still under 50%: not so much that ASI probably won’t destroy us, but simply that we won’t get to ASI at all any time soon. Although I’ve seen P(Doom) given a standard time range of the next 100 years, which is a rather long time! But I still suspect some are thinking directly about the near future and LLMs without extrapolating too much beyond that.