I think the case for evolution is a bit stronger than you admit here (i.e., significantly more than 4%).
It operated over a much larger ladder of cognitive power levels, whereas human learning is only implemented in humans, who do not differ dramatically in cognitive ability. Evolution is thus the closer analogy for designing a quick-takeoff ASI that is as superior to humans as humans are to ants (leaving aside whether that scenario is plausible), and for understanding differences between levels of cognitive ability by analogy with different animals.
Human learning by itself can't improve abilities all that much; at the very least, it has a far lower ceiling than evolution does.
Qualitative advances in ML methods (such as adding RL on top of SFT; I'm not counting, e.g., neural architecture search, since most architectures behave similarly at scale) significantly change a system's behaviour, and such changes are closer in kind to evolutionary change.