But importantly, we don’t currently know how to do that, or whether it’s even possible without involving ASIs, or making use of detailed low-level models of a particular brain, or requiring hundreds of subjective years to achieve substantial results — or even needing more than one of these at once.
This has the shape of a worry I have about the immediate feasibility of LLM AGIs (before RL and friends recycle the atoms). They lack automatic access to skills for agentic autonomous operation, so the first analogy is with stroke victims. What needs to happen for them to become AGIs is a recovery program: teaching basic agency skills and their activation at appropriate times. But if LLMs are functionally more like superhumanly erudite low-IQ humans, figuring out how to teach them the use of those skills might be too difficult, and even if successful it won’t be immediately useful for converting compute to research.