Allegedly Cho Chikun was asked how many stones he would want from God and said “about four”.
I’m not sure what the corresponding figure would be for chess. (Nor actually what its “units” would be—chess doesn’t have a handicapping system as straightforward as go does, and I wonder whether Elo-like ratings go awry if one player is playing absolutely perfectly.)
You can actually calculate this now. Regan has noted that computer chess engines are reaching the point where they are effectively perfect and roughly equivalent to one another, so whatever the gap is between them and the best human player ever can be converted into a material advantage. (Not that I know how to do this myself, but I assume anyone somewhat familiar with Elo ratings and chess engines could take the Elo difference and work out the corresponding material odds. Regan thinks perfect play sits somewhere around 3600 Elo. Apparently chess AIs can already offer at least “pawn and move, pawn, exchange, and four-move odds” and still beat US champions & grandmasters like Hikaru Nakamura.)
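To give a sense of what such an Elo gap means per game (before anyone converts it into material odds, which is the hard part): the standard Elo expected-score formula turns a rating difference into an expected result. A minimal sketch, assuming an engine around 3600 and a peak human around 2850 (the specific ratings are illustrative, not measured figures):

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo expected score (win = 1, draw = 0.5) for A vs. B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Evenly matched players split the point on average.
print(round(expected_score(2850, 2850), 4))  # → 0.5

# A ~750-point gap (hypothetical 3600 engine vs. 2850 human)
# implies the engine scores roughly 99% of the points.
print(round(expected_score(3600, 2850), 4))  # → 0.9868
```

Note this formula says nothing by itself about how many pawns or moves of odds that gap is worth; that translation requires empirical odds-game data like the Nakamura matches mentioned above.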
But maybe that was a little hard to answer, so let me put the question the other way: has there ever been a case where a strategy game played seriously & competitively (i.e. not tic-tac-toe or blackjack) by adult humans was solved to perfect or superhuman play levels by AI researchers, and the perfect or superhuman play turned out to be identical to, or so close to, the top human’s play that the human could win regularly?
A game like that could occur between humans and A.I. with online collectible card games. (I’m specifying online because the rules are streamlined and mass competition is far more available.)
I also don’t know of any.