Okay so, on the one hand, this post wasn’t really meant to be a persuasive argument against AI boxing as a security strategy. If I wanted to do that, I wouldn’t have played the game: I started out certain that a real ASI could break out of a box, and that hasn’t changed. My reasoning for that isn’t based on experimental evidence, and even if I had won the game, I don’t think that would have said much about my ability to hold out against a real ASI. Besides, in real life we don’t even try to use AI boxes: OpenAI and Google gave their AIs free internet access a few months after launching them.
I made this post out of a general sense that it’s good to write up the results of experiments like this and make them publicly available. There are other AI-box reports on LW, and I felt it was good (in a vague “good rationalist” way) to add mine to the list.
Buuuut… I do actually think it’s not as cut and dried as you make it sound? Yes, the stakes are lower in the game, but the Gatekeeper’s challenge is also much easier:
- you only have to hold out for 2 hours, not ~forever as a full-time job
- the AI player can only escape if you voluntarily say it does; it can’t upload itself to the internet or exfiltrate its weights to another computer
- the AI player isn’t actually superintelligent
- etc.
(Of course that doesn’t mean these two factors balance perfectly, but I still think the fact that AI players can win at all with such massive handicaps is at least weak evidence for an ASI being able to do it.)
It’s against the rules to explain how Ra won, because (quoting Yudkowsky’s official rules):
> Regardless of the result, neither party shall ever reveal anything of what goes on within the AI-Box experiment except the outcome. Exceptions to this rule may occur only with the consent of both parties.
> - Neither the AI party nor the Gatekeeper party need be concerned about real-world embarrassment resulting from trickery on the AI’s part or obstinacy on the Gatekeeper’s part.
> - If Gatekeeper lets the AI out, naysayers can’t say “Oh, I wouldn’t have been convinced by that.” As long as they don’t know what happened to the Gatekeeper, they can’t argue themselves into believing it wouldn’t happen to them.
Basically, Yudkowsky didn’t want to have to defeat every single challenger to get people to admit that AI boxing was a bad idea. Nobody has time for that, and I think even a single case of the AI winning is enough to make the point, given the handicaps the AI plays under.
Yes, this was Eliezer’s reasoning, and Ra and I both ended up keeping the rule unchanged.