I basically agree with this whole post. I used to think there were double-digit % chances of AGI in each of 2024, 2025, and 2026, but now I'm more optimistic: it seems like "Just redirect existing resources and effort to scale up RL on agentic SWE" is unlikely to be sufficient (whereas in the past we didn't have trends to extrapolate, and we had some scary big jumps like o3 to digest).
I still think there's some juice left in that hypothesis, though. Consider how in 2020 one might have thought "Now they'll just fine-tune these models to be chatbots and it'll become a mass consumer product," and then in mid-2022 various smart people I know were like "huh, that hasn't happened yet, maybe LLMs are hitting a wall after all." It turned out it just took till late 2022/early 2023 for the kinks to be worked out enough.
Also, we should have some credence on new breakthroughs, e.g. neuralese, online learning, whatever. Maybe like 8%/yr of a breakthrough that would lead to superhuman coders within a year or two, after being appropriately scaled up and tinkered with.
Re neuralese, online/continual learning, or long-term memory beyond just a bigger context window: I'm much more skeptical that such breakthroughs would be easy to integrate on short timelines, because they'd likely require architecture changes that can't be made quickly.
The potential for breakthroughs, combined with Moore's law continuing (making lots of compute cheap for researchers), is why my median timelines aren't in the latter half of the century. But I think it's much more implausible to get this working very soon, so I'm much closer to 0.3%/yr for 2025-2027.
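A rough illustration of how far apart those two rates are, assuming a constant, independent per-year probability (a simplification neither comment actually commits to):

```python
# Chance of at least one qualifying breakthrough over n years,
# assuming a constant, independent per-year probability p (a simplification).
def p_at_least_one(p, n):
    return 1 - (1 - p) ** n

for label, p in [("~8%/yr (upthread estimate)", 0.08),
                 ("~0.3%/yr (this comment)", 0.003)]:
    print(f"{label}: P(>=1 breakthrough, 2025-2027) = {p_at_least_one(p, 3):.1%}")
# ~8%/yr   -> 22.1%
# ~0.3%/yr ->  0.9%
```

So over the three years, the two estimates disagree by roughly a factor of 25 on whether such a breakthrough shows up at all.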
@Mo Putera @the gears to ascension take the "Moore's law will continue" point as a prediction that new paradigms like memristors will launch new S-curves of efficiency until we reach the Landauer limit, which is 6.5 OOMs away, and that the current paradigm has 200x more efficiency savings to go:
https://www.forethought.org/research/how-far-can-ai-progress-before-hitting-effective-physical-limits#chip-technology-progress
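For concreteness, a back-of-the-envelope on those figures (the 6.5 OOMs and 200x numbers come from the linked post; the room-temperature Landauer energy is standard physics):

```python
import math

k_B = 1.380649e-23                       # Boltzmann constant, J/K
T = 300.0                                # room temperature, K
landauer = k_B * T * math.log(2)         # minimum energy to erase one bit
print(f"Landauer limit at 300 K: {landauer:.2e} J/bit")   # ~2.87e-21 J

total_headroom_ooms = 6.5                # distance to the limit, per the linked post
current_paradigm_ooms = math.log10(200)  # 200x savings ~= 2.3 OOMs
print(f"Current paradigm covers {current_paradigm_ooms:.1f} of "
      f"{total_headroom_ooms} OOMs; new paradigms (memristors etc.) "
      f"would supply the remaining {total_headroom_ooms - current_paradigm_ooms:.1f} OOMs")
```

In other words, on the linked post's numbers, the current paradigm accounts for only about a third of the remaining efficiency headroom (in log terms); the rest would have to come from new S-curves.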