LLMs can also be used to generate new ideas, but most are garbage. So improving testing (and maybe selection of the most promising ones) will help us find "true AGI", whatever it is, more quickly. We also have enough compute to test most ideas.
But one feature of AGI is much higher computational efficiency. If we get an AGI 1000 times more efficient than current LLMs, we will have a large hardware overhang in the form of many datacenters. Using that overhang could cause an intelligence explosion.
Seems like the first two points contradict each other. How can an LLM not be good at discovery and yet also automate human R&D?
They can automate it by quickly searching already published ideas and quickly writing code to test new ideas.
I see, so the theory is that we are bottlenecked on testing old and new ideas, not on having the right new ideas.