There is a dis-analogy: in the former case there is a single goal, getting good at chess. In the latter case there are many things we want AIs to do, ranging from coding to running scientific experiments to curing diseases and even making art. Obviously, if you want a generalist, you will want to teach general skills.
Secondly, a big reason labs are focusing on ML research is to get onto the super-exponential curve of recursive self-improvement.
Your analogy addresses neither of these points, which I think are the primary reasons people are trying to get AIs to do well at ML research. I therefore think the analogy is a bad one, and you should not make inferences or plans based on it.