Your title seems clickbaity: the post answers its question with “no”, and the article would have been more surprising had the answer been “yes”. (And my expectation was that once you ask that question in the title, you no longer know the answer.)
> having implicit access to categorisation modules that themselves are valid only in typical situations… is not a way to generalise well
How do you know this? Should we turn this into one of those concrete ML experiments?
Moravec’s paradox again. Chessmasters didn’t easily program chess programs; and those chess programs didn’t generalise to games in general.
I’d say a more relevant analogy is whether some ML algorithm could learn to play Go teaching games against a master, from examples of a master playing teaching games against a student, without knowing what Go is.
And whether those programs could then perform well if their opponent forces them into a very unusual situation, one that would never have appeared in a chessmaster game.
If I sacrifice a knight for no advantage whatsoever, will the opponent be able to deal with that? What if I set up a trap to capture a piece, relying on my opponent not seeing it? A chessmaster playing another chessmaster would never set a simple trap, since it would never succeed; so would an ML system trained on such games be able to deal with one?
Hehe—I don’t normally do this, but I feel I can indulge once ^_^
PS: the other title I considered was “Why do people feel my result is wrong”, which felt too condescending.
That would be good. I’m aiming to have a lot more practical experiments from my research project, and this could be one of them.
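To make the proposed experiment concrete, here is a minimal sketch of the kind of test being discussed, with tic-tac-toe standing in for chess/Go and a tabular lookup standing in for an ML model; all names and parameters are illustrative assumptions, not anything from the post. A perfect-play expert generates expert-vs-expert games, a "clone" memorises the expert's moves in those games, and we then measure how often the clone even recognises the positions that arise against a random opponent who wanders off the expert distribution:

```python
import random
from functools import lru_cache

# Toy imitation experiment (illustrative, not from the post): clone a
# minimax tic-tac-toe expert from expert-vs-expert games, then check how
# often the clone's training data covers states reached against a random
# opponent -- the "trap-setting" opponent scenario in miniature.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != "." and b[i] == b[j] == b[k]:
            return b[i]
    return None

def legal(b):
    return [i for i, c in enumerate(b) if c == "."]

@lru_cache(maxsize=None)
def value(b, player):
    """Minimax value with `player` to move: +1 X wins, -1 O wins, 0 draw."""
    w = winner(b)
    if w is not None:
        return 1 if w == "X" else -1
    ms = legal(b)
    if not ms:
        return 0
    nxt = "O" if player == "X" else "X"
    vals = [value(b[:m] + player + b[m+1:], nxt) for m in ms]
    return max(vals) if player == "X" else min(vals)

def best_moves(b, player):
    nxt = "O" if player == "X" else "X"
    vals = {m: value(b[:m] + player + b[m+1:], nxt) for m in legal(b)}
    target = max(vals.values()) if player == "X" else min(vals.values())
    return [m for m, v in vals.items() if v == target]

def expert_selfplay_games(n, rng):
    """Record an expert move for every state visited in n expert-vs-expert games."""
    policy = {}
    for _ in range(n):
        b, p = "." * 9, "X"
        while winner(b) is None and legal(b):
            m = rng.choice(best_moves(b, p))
            policy.setdefault((b, p), m)
            b, p = b[:m] + p + b[m+1:], ("O" if p == "X" else "X")
    return policy

def coverage_vs_random(policy, n, rng):
    """Fraction of X's decisions (against a random O) that land on a trained state."""
    seen = total = 0
    for _ in range(n):
        b, p = "." * 9, "X"
        while winner(b) is None and legal(b):
            if p == "X":
                total += 1
                if (b, p) in policy:
                    seen += 1
                    m = policy[(b, p)]
                else:
                    m = legal(b)[0]  # off-distribution: the clone has no idea
            else:
                m = rng.choice(legal(b))
            b, p = b[:m] + p + b[m+1:], ("O" if p == "X" else "X")
    return seen / total
```

Expert self-play only ever visits states that both sides play well in, so a random (or deliberately trappy) opponent quickly drags the clone into positions its training games never contained; the coverage fraction makes that gap measurable.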