LLMs can also be used to generate new ideas, but most are garbage. So improving testing (and maybe selection of the most promising ones) will help us find “true AGI”, whatever it is, more quickly. We also have enough compute to test most ideas.
But one feature of AGI is much higher computational efficiency. If we got an AGI 1000 times more efficient than current LLMs, we would have a large hardware overhang in the form of many datacenters. Using that overhang could cause an intelligence explosion.
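The overhang arithmetic above can be made explicit with a back-of-envelope sketch. All numbers here are illustrative assumptions (the 1000x figure comes from the hypothetical in the text; nothing is measured data): on fixed installed hardware, an efficiency gain translates directly into a multiplier on how many equivalent-capability instances can run at once.

```python
def overhang_multiplier(efficiency_gain: float, capacity_factor: float = 1.0) -> float:
    """How many more equivalent-capability instances fixed hardware can run
    after an efficiency gain. capacity_factor scales for hardware not
    actually available (utilization, reserved capacity, etc.)."""
    return efficiency_gain * capacity_factor

# Hypothetical: AGI is 1000x more compute-efficient than current LLMs,
# and all existing datacenter capacity is usable.
print(overhang_multiplier(1000.0))  # 1000.0 -- the "overhang" multiplier
```

The point of the sketch is just that the multiplier is linear: the moment the efficient design exists, the already-built datacenters represent a 1000x jump in runnable instances, with no new hardware required.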
I see, so the theory is that we are bottlenecked on testing old and new ideas, not on having the right new ideas.