Suppose in 2000 you use “superhuman Othello from self-play” as a benchmark of a certain kind of impressive AI progress, and forecast it to be possible by 2020. It seems you were correct: very plausibly the AlphaZero architecture would work for this. However, in a strict sense your forecast was wrong, because no one has actually bothered to build a powerful Othello agent from self-play.
This might be a bad example? Quoting Wikipedia: “There are many Othello programs… that can be downloaded from the Internet for free. These programs, when run on any up-to-date computer, can play games in which the best human players are easily defeated.” Arguably they are not “from self-play” because they use hand-crafted evaluation functions. But “no one has actually bothered to build a powerful Othello agent” seems just plain wrong.
Thanks for pointing that out. I was aware of such superhuman programs, but the last sentence failed to make the self-play condition sufficiently clear. I’ve updated it now to reflect this.