I take from this comment that you do not see “AI winning the gold medal” as strong a predictor of superintelligence arriving soon as I do.
I agree with the A/B < C/D part but may disagree with the “<<”. LLMs already display common sense. LLMs already generalize pretty well. Verifying whether a given game design is good is mostly a matter of common sense plus reasoning. Finding a good game design, given that you know how to verify one, is a matter of search.
I expect an AI that is good at both reasoning and search (as it has to be to win the IMO gold medal) to be quite capable of mechanism design as well, once it also knows how to connect common sense to reasoning + search. I don’t expect this to be trivial, but I do expect it to depend more on training data than on architecture.
Edit: by “training data” here I mostly mean “experience and feedback from multiple tasks” in a reinforcement learning sense, rather than more “passive” supervised learning.