Saw your update on GitHub: https://adamkarvonen.github.io/machine_learning/2024/03/20/chess-gpt-interventions.html
Awesome that you expanded on the introspection.
Two thoughts regarding the new work:
(1) I’d consider normalizing the performance data for the random cases against another chess program with similar performance under normal conditions. It may be that introducing 20 random moves at the start of a game biases all players toward a 50/50 win outcome, so the sub-50% performance may not reflect a failure of flipping the “don’t suck” switch, but simply good performance in a scenario where outcomes are pulled toward average. It’d be interesting to see whether Chess-GPT’s relative performance against other chess programs in the random scenario was better than its relative performance in the normal case.
(2) The ‘fuzziness’ of the board positions you found when removing the pawn makes complete sense given one of the more nuanced findings in Hazineh et al., “Linear Latent World Models in Simple Transformers: A Case Study on Othello-GPT” (2023) - specifically, that the model encoded representations of board configurations, not just individual pieces (in that case, three stones in a row). Piecemeal removal of a piece may have disrupted the learned patterns of how games normally flow, leaving greater uncertainty than in the original board state. A similar issue may be at play with the 20 random opening moves, and I’d be curious what the confidence of the board-state representation was when starting 20 random moves in, and whether that confidence stabilized as the game went on (a rough sketch of what I mean is below).
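To make that concrete, here’s a minimal sketch of the kind of measurement I have in mind, assuming you already have the linear probe’s per-move outputs for a game. The array shapes and the extraction step mentioned in the comments are placeholders, not the actual Chess-GPT API:

```python
import numpy as np

def board_confidence(probe_logits):
    """probe_logits: (n_moves, 64, n_piece_classes) linear-probe outputs for one game.
    Returns one score per move: the mean over squares of the max softmax probability,
    i.e. how 'sharp' the reconstructed board state is at that point in the game."""
    z = probe_logits - probe_logits.max(axis=-1, keepdims=True)  # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1).mean(axis=-1)  # shape: (n_moves,)

# Hypothetical usage -- how activations and probe logits get extracted depends on the codebase:
# conf = board_confidence(probe_logits_for_game)
# right_after_random, later = conf[20:25].mean(), conf[40:].mean()
# Averaging those two numbers over many games would show whether confidence recovers.
```

Max probability per square is a crude confidence measure, but it should be enough to tell whether the board representation sharpens back up after the random opening.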
Overall really cool update!
And bigger picture, the prospect of essentially flipping an internalized skill vector in larger models, to bias them back away from their regression to the mean, is particularly exciting.
Both are great points, especially #1. I’ll run some experiments and report back.
I had the following results:
Stockfish level 2 vs Stockfish level 0, 0.01 seconds per move, 5k games:
0 random moves: win rate 81.2%
20 random moves: win rate 81.2%
40 random moves: win rate 77.9%
95% confidence interval is about ±1%
Stockfish level 15 vs level 9, 0.01 seconds per move, 5k games:
0 random moves: 65.5%
20 random moves: 72.8%
40 random moves: 67.5%
Once again, the 95% confidence interval is about ±1%
At 120 seconds per move, both of these level differences correspond to ~300 Elo: https://github.com/official-stockfish/Stockfish/commit/a08b8d4
These games are at 0.01 seconds per move, and it appears that less search time lowers the effective Elo difference for level 15 vs level 9: a 65% win rate corresponds to a ~100 Elo difference, while an 81% win rate corresponds to a 250-300 Elo difference.
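(For reference, the standard logistic relation between expected score s and Elo difference is ΔE = 400·log10(s / (1 − s)), which gives about 107 Elo for s = 0.65 and about 252 Elo for s = 0.81.)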
Honestly, not too sure what to make of the results. One possible confound is that in every case the higher-level player is White, and starting a game from a random position may favor whoever moves first. Level 2 vs level 0 seems most applicable to the Chess-GPT setting.
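In case it’s useful for reproducing this, here’s a minimal sketch of the setup using python-chess to drive two Stockfish processes. It isn’t the exact script behind the numbers above: the skill levels, the random-opening length (counted in plies here), and counting draws as half a win are illustrative assumptions.

```python
import random
import chess
import chess.engine

def play_game(white, black, n_random_plies, seconds=0.01):
    """One game: a shared random opening, then each engine plays its own side."""
    board = chess.Board()
    for _ in range(n_random_plies):
        if board.is_game_over():
            break
        board.push(random.choice(list(board.legal_moves)))
    while not board.is_game_over():
        engine = white if board.turn == chess.WHITE else black
        board.push(engine.play(board, chess.engine.Limit(time=seconds)).move)
    return board.result()  # "1-0", "0-1", or "1/2-1/2"

def run_match(skill_white, skill_black, n_random_plies, n_games):
    """Score for White (the higher-skill engine here), counting draws as half a win."""
    white = chess.engine.SimpleEngine.popen_uci("stockfish")
    black = chess.engine.SimpleEngine.popen_uci("stockfish")
    white.configure({"Skill Level": skill_white})
    black.configure({"Skill Level": skill_black})
    try:
        score = 0.0
        for _ in range(n_games):
            result = play_game(white, black, n_random_plies)
            score += 1.0 if result == "1-0" else 0.5 if result == "1/2-1/2" else 0.0
        return score / n_games
    finally:
        white.quit()
        black.quit()

if __name__ == "__main__":
    for n in (0, 20, 40):
        print(n, "random plies:", run_match(2, 0, n, n_games=100))
```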
Interesting results—definitely didn’t expect the bump at random 20 for the higher skill case.
But I think it’s really useful to know that Chess-GPT’s performance decrease from initial random noise isn’t a generalized phenomenon. Appreciate the follow-up!!