I suspect that you’re right, but that this isn’t as comforting as might be expected in the event of AGI.
The flexibility and capacity of human brains to learn skills and acquire knowledge has allowed us to shorten the timelines for developing skills and knowledge in our species by many orders of magnitude. Instincts honed by evolution take a long time to develop further. Knowledge acquired through literacy and mass distribution takes far less, but is still limited by the couple of decades it takes for children to start from a blank slate and learn through to adulthood, so that they (hopefully) have enough skills to function in society. A few of them learn new things that add to and refine the collective store of knowledge. Then they die, only a few decades after building up to basic competence, and any skills and knowledge not explicitly passed on are lost. Still, by the standards of evolution this is lightning fast.
The development of AGI promises that some entities will be capable of learning much faster than we do, likely in months at most instead of decades. Later versions may be able to learn faster still. They will also be copyable in a trained state, with the copies learning further from there, without inevitable death imposing an upper bound.
This will close a completely new positive feedback loop, even without considering recursive self-improvement of the underlying software or hardware. We should expect closing additional, faster feedback loops in knowledge and skill acquisition to lead to unpredictable capabilities, beyond anything previously seen. Likely improvements in the underlying software and hardware would just add two more positive feedback loops in capability gain.