The Friendly AI Game

At the recent London meet-up someone (I’m afraid I can’t remember who) suggested that one might be able to solve the Friendly AI problem by building an AI whose concerns are limited to some small geographical area, and which doesn’t give two hoots about what happens outside that area. Ciphergoth pointed out that this would probably result in the AI converting the rest of the universe into a factory to make its small area more awesome. In the process, he mentioned that you can make a “fun game” out of figuring out ways in which proposed utility functions for Friendly AIs can go horribly wrong. I propose that we play.
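To see why ciphergoth’s objection bites, it helps to write the proposal down. Here is a minimal sketch, assuming we can meaningfully restrict a world-state to a region (the names U, f, R and w are mine, purely illustrative, not anything proposed at the meet-up):

```latex
% w ranges over complete world-states; R is the fixed region the AI is
% meant to care about; w|_R is the part of w inside R; and f scores how
% "awesome" that part is. All names here are illustrative assumptions.
\[
  U(w) \;=\; f\bigl( w|_R \bigr)
\]
```

Since U(w) = U(w′) whenever w and w′ agree inside R, the AI is exactly indifferent to everything outside R, so converting the rest of the universe into mining, computation and manufacturing in the service of R costs it nothing and raises f.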

Here’s the game: reply to this post with proposed utility functions, stated as formally, or at least as accurately, as you can manage; follow-up comments then explain why a superhuman intelligence built with that particular utility function would do things that turn out to be hideously undesirable.

There are three reasons I suggest playing this game. In descending order of importance, they are:

  1. It sounds like fun.

  2. It might help to convince people that the Friendly AI problem is hard(*).

  3. We might actually come up with something that’s better than anything anyone’s thought of before, or something where the proof of Friendliness is within grasp. The solutions to difficult mathematical problems often look obvious in hindsight, and it surely can’t hurt to try.

DISCLAIMER (probably unnecessary, given the audience): I think it is unlikely that anyone will manage to come up with a formally stated utility function for which none of us can figure out a way in which it could go hideously wrong. But even if someone does, that does NOT constitute a proof of Friendliness, and I 100% do not endorse any attempt to implement an AI with that utility function.
(*) I’m slightly worried that it might have the opposite effect: people may build more and more complicated conjunctions of desires to overcome the objections we’ve already seen, and come to think the problem amounts to nothing more than writing a long list of special cases. On balance, though, I think that’s likely to have less of an effect than just seeing how naive suggestions for Friendliness can be hideously broken.