Thanks for the insights. Actually, board game models don’t play very well when they are heavily losing, or so heavily winning that the margin no longer matters. A human player would try to trick you and hope for a mistake. That is not necessarily the case with these models: they play as if you were exactly as good as them, which makes a losing position look unwinnable.
It’s much the same with AlphaGo. AlphaGo plays incredibly well until there is a large imbalance. Surprisingly, it also doesn’t care whether it wins by 10 points or by half a point, and sometimes plays moves that look bad to humans simply because it is winning anyway. And when it’s losing, since it assumes its opponent is just as strong, it can’t find a leaf in the tree search that ends up winning. Moreover, I suspect that removing a piece is prone to distribution shift.
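Both behaviours fall out of a binary win/loss objective under an assumed-optimal opponent. Here is a hypothetical toy sketch using plain minimax (AlphaGo actually uses MCTS with a learned value network, but its objective is likewise a win probability, not a score margin):

```python
def value(node, maximizing=True):
    """Binary minimax. Leaves hold the final score margin from our
    perspective; internal nodes are lists of children (players alternate).
    Returns 1 for a win, 0 for a loss -- margins are discarded."""
    if not isinstance(node, list):
        return 1 if node > 0 else 0  # binary objective: win or loss only
    vals = [value(c, not maximizing) for c in node]
    return max(vals) if maximizing else min(vals)

# 1) Indifference to margin: a safe half-point win and a flashy 10-point
#    win both evaluate to 1, so the engine has no reason to prefer either.
print(value([0.5]), value([10]))  # -> 1 1

# 2) Assumed-perfect opponent: from a lost position, a "tricky" move that
#    a human might misplay and a passive move both score 0, so the engine
#    sees no value in setting traps.
tricky = [-1, 5]   # a perfect opponent picks -1; a human might blunder into +5
passive = [-3]
print(value([tricky]), value([passive]))  # -> 0 0
```

Under this objective, every winning continuation is interchangeable and every losing one is hopeless, which matches both the "sloppy while winning" and "no swindle attempts while losing" behaviour described above.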