Interesting tweet: LLMs are not AGI but will provide instruments for AGI in 2026
“(Low quality opinion post / feel free to skip)
Now that AGI isn’t cool anymore, I’d like to register the opposing position.
- AGI is coming in 2026, more likely than not
- LLMs are big memorization/interpolation machines, incapable of making scientific discoveries or working efficiently on OOD concepts. They're not sufficient for AGI. My prediction stands regardless.
- Something akin to GPT-6, while not AGI, will automate human R&D to such an extent that AGI would quickly follow. Precisely: AGI will happen at most 6 months after the public launch of a model as capable as we'd expect GPT-6 to be.
- Not being able to use current AI to speed up any coding work, no matter how OOD it is, is a skill issue (no shots fired)
- Multiple paths are converging to AGI, quickly, and the only ones who do not see this are those focusing on LLMs specifically, which are, in fact, NOT converging to AGI. Focus on “which capabilities computers are unlocking” and “how much this is augmenting our own productivity”, and the relevant feedback loop becomes much clearer.”
https://x.com/VictorTaelin/status/1979852849384444347
Seems like the first two points contradict each other. How can an LLM not be good at discovery and also automate human R&D?
They can automate it by quickly searching already-published ideas and quickly writing code to test new ideas.
I see, so the theory is that we are bottlenecked on testing old and new ideas, not on having the right new ideas.
LLMs can also be used to generate new ideas, but most are garbage. So improving testing (and maybe the selection of the most promising ones) will help us find “true AGI”, whatever it is, more quickly. We also have enough compute to test most ideas.
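To make that feedback loop concrete, here is a minimal sketch of what such a generate-select-test pipeline might look like. None of this comes from the tweet or the thread; every function name is a hypothetical placeholder:

```python
# Hypothetical sketch of the loop described above: generate many ideas,
# cheaply score them, and spend expensive compute only on a shortlist.
# Every name here is an illustrative placeholder, not a real API.

from typing import Callable

def generate_ideas(complete: Callable[[str], str], n: int) -> list[str]:
    # Most of these will be garbage; volume is the point.
    return [complete(f"Propose a novel research idea (#{i})") for i in range(n)]

def score_idea(complete: Callable[[str], str], idea: str) -> float:
    # Cheap screening pass: ask the model to rate the idea in [0, 1].
    return float(complete(f"Rate from 0 to 1 how promising this idea is: {idea}"))

def run_experiment(idea: str) -> bool:
    # Expensive step: write and run code that actually tests the idea.
    raise NotImplementedError("real experiment automation goes here")

def research_loop(complete: Callable[[str], str],
                  n_ideas: int = 100, keep: int = 5) -> list[str]:
    ideas = generate_ideas(complete, n_ideas)
    # Selection: rank by the cheap score, keep only the top few.
    shortlist = sorted(ideas, key=lambda i: score_idea(complete, i),
                       reverse=True)[:keep]
    # Testing: compute is spent only on the shortlisted ideas.
    return [idea for idea in shortlist if run_experiment(idea)]
```

The design point is that generation and screening are cheap LLM calls, while `run_experiment` is where the real compute goes; selection exists precisely because most generated ideas are garbage.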
But one feature of AGI would be much higher computational efficiency. If we get an AGI 1000 times more efficient than current LLMs, we will have a large hardware overhang in the form of the many existing datacenters. Using that overhang could cause an intelligence explosion.
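To put numbers on the overhang claim, a back-of-the-envelope calculation under purely illustrative assumptions (neither figure comes from the thread):

```python
# Back-of-the-envelope overhang arithmetic; both inputs are made-up assumptions.
llm_instances_today = 1_000_000  # assumed concurrent LLM instances today's datacenters support
efficiency_gain = 1_000          # assumed AGI compute efficiency vs. current LLMs
agi_instances = llm_instances_today * efficiency_gain
print(f"Same hardware could run ~{agi_instances:,} AGI instances")  # ~1,000,000,000
```

The point is only that the multiplier applies to hardware that already exists, so the jump in deployed capability would be immediate rather than gated on building new datacenters.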