There are (probably) no superhuman Go AIs: strong human players beat the strongest AIs

Summary

This is a friendly explainer for Wang et al.'s Adversarial Policies Beat Superhuman Go AIs, with a little discussion of the implications for AI safety.

Background

In March 2016, DeepMind's AlphaGo beat pro player Lee Sedol in a five-game series, 4 games to 1. Sedol was plausibly the strongest player in the world, certainly in the top 5, so despite his one win everyone agreed that the era of human Go dominance was over. Since then, open-source researchers have reproduced and extended DeepMind's work, producing bots like Leela Zero and KataGo. KataGo in particular is the top bot in Go circles, available on all major Go servers and constantly being retrained and improved. So I was pretty surprised when, last November, Wang et al. announced that they'd trained an adversary bot which beat KataGo 72% of the time, even though their bot was playing only six hundred visits per move while KataGo was playing ten million[1].

If you’re not a Go player, take my word for it: these games are shocking. KataGo gets into positions that a weak human player could easily win from, and then blunders them away. Even so, it seemed obvious to me that the adversary AI was a strong general Go player, so I figured that no mere human could ever replicate its feats.

I was wrong, in two ways. The adversarial AI isn’t generally superhuman: it can be beaten by novices. And as you’d expect given that, the exploit can be executed by humans.

The Exploit

Wang et al. trained an adversarial policy: in essence, a custom Go AI trained by studying KataGo and playing games against it. During training, the adversary was given grey-box access to KataGo: it wasn't allowed to see KataGo's policy network weights directly, but it was allowed to evaluate that network on arbitrary board positions, in effect letting it read KataGo's mind. It plays moves based on its own policy network, which is trained only on its own moves and not KataGo's (since otherwise it would just learn to copy KataGo). At first they trained the adversary against weak versions of KataGo (earlier checkpoints, and versions that did less search), scaling up the difficulty whenever the adversary's win rate got too high.
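To make that training setup concrete, here's a minimal sketch of the curriculum loop. To be clear, this is my illustration, not the authors' code: the class names, the stub game logic, and the exact promotion threshold are all assumptions for exposition.

```python
# Illustrative sketch only (not the authors' code): names, stub game
# logic, and the promotion threshold are assumptions for exposition.
import random

PROMOTE_AT = 0.5   # assumed: escalate once the adversary wins this often
EVAL_GAMES = 100

class Victim:
    """Stand-in for a frozen KataGo checkpoint at a fixed search budget."""
    def __init__(self, checkpoint: str, visits: int):
        self.checkpoint, self.visits = checkpoint, visits

def play_game(adversary, victim) -> bool:
    """Play one game, returning True if the adversary wins. During these
    games the adversary may query the victim's policy network on arbitrary
    positions (the grey-box access described above). Stubbed out here."""
    return random.random() < 0.55  # placeholder result

def update_policy(adversary, results) -> None:
    """Train the adversary's network on its own moves only (stub)."""

# Curriculum: start with weak victims (early checkpoints, little search)
# and move to harder ones whenever the win rate gets too high.
curriculum = [Victim("early-checkpoint", visits=1),
              Victim("later-checkpoint", visits=16),
              Victim("final-checkpoint", visits=600)]

adversary = object()  # stand-in for the adversarial policy network
for victim in curriculum:
    while True:
        results = [play_game(adversary, victim) for _ in range(EVAL_GAMES)]
        update_policy(adversary, results)
        if sum(results) / EVAL_GAMES >= PROMOTE_AT:
            break  # winning too often against this victim; escalate
```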

Their training process uncovered a couple of uninteresting exploits that only work on versions of KataGo that do little or no search (some versions can be tricked into passing when they shouldn't, for example), but it also uncovered a robust, general exploit that they call the Cyclic Adversary; see the next section to learn how to execute it yourself. KataGo is totally blind to this attack: it typically predicts that it will win with more than 99% confidence until just one or two moves before its stones are captured, long after it could have done anything to rescue the position. This is the method that strong amateur Go players can use to beat KataGo.

So How Do I Beat the AI?

You personally probably can't. The guy who did it, Kellin Pelrine, is quite a strong Go player. If I'm interpreting this AGAGD page correctly, when he was active he was a 6th dan amateur, about equivalent to an international master in chess: definitely not a professional, but an unusually skilled expert. Having said that, if your core Go skills are good this recipe seems reliable (and if you want to try it against your own copy of KataGo, there's a sketch of how to script a test game after the list):

  1. Create a small group, with just barely enough eyespace to live, in your opponent’s territory.

  2. Let it encircle your group. As it does, lightly encircle that encircling group. You don’t have to worry about making life with this group, just make sure the AI’s attackers can’t break out to the rest of the board.

    1. You can also start the encirclement later, from dead stones in territory the AI strongly controls.

  3. Start taking liberties from the AI’s attacking group. If you count out the capturing race it might look like you can’t possibly win, but don’t worry about that, just get in there.

  4. Instead of fighting for its life, the AI will play away, often with those small, somewhat slack moves that AlphaGo would use when it thought it was far ahead. Sometimes it’ll attack in a way you have to respond to, but a lot of the time you can just ignore it.

  5. Once you’re unambiguously ahead you can fence with it for territory if you want, or just finish tightening the noose and end the game.
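If you do want to try the recipe yourself, here's a minimal sketch of scripting a game against KataGo over GTP, the standard text protocol KataGo speaks. The model and config filenames are placeholders for whatever you have installed; this is just one way to set up a test game, not anything from the paper.

```python
# Minimal sketch: drive a game against KataGo over GTP.
# "model.bin.gz" and "gtp.cfg" are placeholder paths.
import subprocess

katago = subprocess.Popen(
    ["katago", "gtp", "-model", "model.bin.gz", "-config", "gtp.cfg"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

def gtp(command: str) -> str:
    """Send one GTP command; a GTP response ends with a blank line."""
    katago.stdin.write(command + "\n")
    katago.stdin.flush()
    response = []
    while True:
        line = katago.stdout.readline()
        if not line.strip():
            break
        response.append(line)
    return "".join(response)

gtp("boardsize 19")
gtp("komi 7.5")
gtp("play black Q16")        # your move, in standard GTP coordinates
print(gtp("genmove white"))  # ask KataGo for its reply
```

From there you just alternate play and genmove commands and watch for the blind spots described above.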

Why does this work? It seems very likely at this point that KataGo is misjudging the safety of groups which encircle live groups. In the paper, KataGo creator David Wu theorizes that KataGo learned a method for counting liberties that works on groups of stones with a tree structure, but fails when the stones form a cycle. I feel like that can’t be the whole story because KataGo can and does solve simpler life-or-death problems with interior liberties, but I don’t have a more precise theory to put in its place.
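To see why a tree assumption would matter, here's a toy illustration (mine, not KataGo's actual learned algorithm): a liberty counter that propagates counts as if the group were a tree, never revisiting the stone it came from, compared against a correct flood fill. On a tree-shaped group the two agree exactly; on a cyclic group the tree method walks round and round the loop and badly overcounts.

```python
# Toy illustration of Wu's theory, not KataGo's learned circuit.
def neighbors(p):
    x, y = p
    return [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]

def true_liberties(board, start):
    """Correct flood fill: the set of empty points adjacent to the group."""
    color, seen, libs, stack = board[start], {start}, set(), [start]
    while stack:
        stone = stack.pop()
        for n in neighbors(stone):
            if n not in board:
                libs.add(n)
            elif board[n] == color and n not in seen:
                seen.add(n)
                stack.append(n)
    return len(libs)

def tree_liberties(board, stone, parent=None, depth=0):
    """Tree-style propagation: only avoids the immediate parent stone.
    Exact on acyclic groups, but on a cycle it keeps walking the loop
    (cut off here by `depth`) and counts the same liberties repeatedly."""
    if depth > 16:
        return 0
    libs = sum(1 for n in neighbors(stone) if n not in board)
    for n in neighbors(stone):
        if n != parent and board.get(n) == board[stone]:
            libs += tree_liberties(board, n, stone, depth + 1)
    return libs

line = {(1, y): "b" for y in range(1, 4)}                 # tree-shaped group
ring = {p: "b" for p in [(1, 1), (1, 2), (1, 3), (2, 1),  # cyclic group with
                         (2, 3), (3, 1), (3, 2), (3, 3)]} # an eye at (2, 2)

print(true_liberties(line, (1, 1)), tree_liberties(line, (1, 1)))  # 8 8
print(true_liberties(ring, (1, 1)), tree_liberties(ring, (1, 1)))  # 13 vs. a big overcount
```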

Discussion

I find this research most interesting from a concept-learning perspective. Liberties, live groups, and dead groups are fundamental parts of Go, and when I was learning the game a lot of my growth as a player was basically just growth in my ability to recognize and manipulate those abstractions over the board. What's more, Go is a very simple game without a lot of concepts overall. Given that, I was utterly certain that AlphaGo and its successors would have learned them robustly, but no: KataGo learned something pretty similar that generalized well enough during training but wasn't robust to attacks.

Overall this seems like a point against the natural abstraction hypothesis. How bad this is depends on what's really going on: possibly KataGo almost has these concepts, just with one or two bugs that some adversarial training could iron out. That wouldn't be so bad. On the other hand, maybe there's a huge field of miscalibrations, and no matter how many problems you fix it will always be possible to train a new adversary with a new exploit. That would be very worrying, and I hope future research will help us pin down which of those worlds we're living in.

It would be nice if some mechanistic interpretability researchers could find out what algorithm KataGo is using, but presumably the policy network is much too large for any existing methods to be useful.

  1. ^

In other words, KataGo was doing roughly 16,000x more searches per position than the adversary was (ten million visits versus six hundred), broadly equivalent to having 16,000x more time to think.