The most efficient form of practice is generally to address one’s weaknesses. Why, then, don’t chess/Go players train by playing against engines optimized for this? I can imagine three types of engines:
Trained to play more human-like sound moves (soundness as measured by stronger engines such as Stockfish or AlphaZero).
Trained to play less human-like sound moves.
Trained to win against (real or simulated) humans while making unsound moves.
The first tool would simply serve as an opponent when humans are inconvenient or unavailable. The second and third tools would expose weaknesses in one’s game more efficiently than playing against either humans or conventional engines. I’m confused about why I can’t find any attempts at engines of type 1 that apply modern deep learning techniques, or any attempts whatsoever at engines of type 2 or 3.
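The second engine type can be framed as a simple selection problem: among the moves a strong engine judges sound, prefer the one a human-move predictor rates as least likely. A minimal sketch of that selection step, assuming the per-move scores (`eval_loss` from a strong engine, `human_prob` from a hypothetical human-move predictor) are supplied externally:

```python
def pick_unhuman_sound_move(candidates, soundness_margin=0.3):
    """Choose the sound move a human is least likely to play.

    candidates: list of (move, eval_loss, human_prob), where eval_loss is
    how much worse the move is than the engine's best move (in pawns) and
    human_prob is a human-move predictor's probability of playing it.
    Both scores are assumed to come from external models (hypothetical here).
    """
    # Moves within the margin of the engine's best move count as "sound".
    sound = [c for c in candidates if c[1] <= soundness_margin]
    if not sound:
        # No sound move available: fall back to the objectively best one.
        return min(candidates, key=lambda c: c[1])[0]
    # Among sound moves, pick the one humans are least likely to play.
    return min(sound, key=lambda c: c[2])[0]

moves = [("Nf3", 0.0, 0.55), ("g3", 0.1, 0.05),
         ("e4", 0.0, 0.30), ("h4", 1.2, 0.01)]
print(pick_unhuman_sound_move(moves))  # "g3": sound, but least human-like
```

In a real engine these scores would come from a search engine's evaluation and a policy network trained on human games, and the trade-off would likely be folded into the training objective rather than applied as a post-hoc filter.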
Someone happened to ask a question on Stack Exchange about engines trained to play less human-like sound moves. The question is here, but most of the answerers don’t seem to understand the question.