Each time, you can also apply this argument in reverse: I don't like X about my city, so I'm happy that in the future the company will relocate me to NYC. And since NYC is presumed to be overall better, there are more instances of the latter than of the former.
It seems to me you are taking the argument seriously, but applying it very selectively.
(I have both kinds of thoughts pretty often, and I'm overall happy about the upcoming move.)
I think the case for the evolution analogy is a bit stronger than you allow here (i.e., significantly more than 4%).
Evolution operated across a much larger ladder of cognitive power levels, whereas human learning is only implemented in humans, who do not differ much from one another in cognitive ability. Evolution is thus the closer analogy for designing a fast-takeoff ASI that is as superior to humans as humans are to ants (leaving aside whether that scenario is plausible), and for understanding the differences between cognitive ability levels by analogy with different animals.
Human learning by itself can't improve abilities all that much; at the least, it has a much lower ceiling than evolution.
Qualitative advances in ML methods (such as adding RL on top of SFT; I'm not counting e.g. neural architecture search, since most architectures behave similarly at scale) significantly change a system's behaviour, and such changes are closer to evolutionary change.