Upgrading a primate didn’t make it strongly superintelligent relative to other primates. The upgrades made us capable of recursively improving our social networking; that was what made the difference.
If you raised a child as an ape, you’d get an ape. That we seem so different now is due to the network effects looping back and upgrading our software.
If you raise an ape as a child, you don’t get a child. You just get an ape.
Yeah, there were important changes. I’m suggesting that most of their long-term impact came from enabling the bootstrapping process. Consider the (admittedly disputed) time lag between anatomical and behavioral modernity and the further accelerations that have happened since.
ETA: If you could raise an ape as a child, that variety of ape would’ve taken off.
I suspect that you’re right, but that this isn’t as comforting as might be expected in the event of AGI.
The flexibility and capacity of human brains to learn skills and acquire knowledge have allowed us to shorten the timelines for developing skills and knowledge by many orders of magnitude. Instincts honed by evolution take a long time to develop further. Knowledge acquired through literacy and mass distribution takes far less, but is still limited by the couple of decades it takes for children to start from a blank slate and learn through to adulthood, so that they (hopefully) have enough skills to function in society. A few of them can learn new things to add to and refine the collection. Then they die, only a few decades after building up to basic competence, and any skills and knowledge not explicitly passed on are lost. Still, by the standards of evolution this is lightning fast.
The development of AGI promises entities capable of learning much faster than we do, likely in months at most rather than decades. Later versions may learn faster still. They can be copied in a trained state, and the copies can continue learning from there, without inevitable death imposing an upper bound.
This closes a completely new positive feedback loop, even without considering recursive self-improvement of the underlying software or hardware. We should expect closing additional, and faster, feedback loops in knowledge and skill acquisition to lead to unpredictable capabilities, beyond anything previously seen. Likely improvements in the underlying software and hardware would add two more positive feedback loops in capability gain.
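A toy numerical sketch (my own illustration, not from the comment) of why each additional feedback loop matters: if every open loop feeds capability back into the growth rate, then adding a loop doesn't just add a constant bump, it multiplies the exponent of the growth curve. The function name and parameters here are invented for illustration.

```python
# Toy model: capability growth with n positive feedback loops.
# Each loop contributes a growth term proportional to current capability,
# so growth per step is (1 + n_loops * rate) -- compounding exponentially.

def capability_after(steps, n_loops, rate=0.01):
    """Capability after `steps` rounds with `n_loops` feedback loops open."""
    c = 1.0
    for _ in range(steps):
        c += n_loops * rate * c  # each open loop feeds capability back into growth
    return c

for loops in (1, 2, 3):
    print(loops, capability_after(200, loops))
```

Under this (very crude) model, going from one loop to two doesn't double the final capability; it roughly squares the growth factor, which is one way to cash out the claim that extra loops lead to gains beyond anything previously seen.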
This may be a case of a threshold effect. I’m not sure what your definition of “strongly superintelligent relative to” is, but unlocking language, social networking, and planning is definitely a superpower, and one that would be hard to predict until it started to appear.
It’s quite possible that the next jump in ability to optimize the future by application of models (intelligence) is small by some measures, but large in impact due to some capability we currently undervalue.