Inferring Values from Imperfect Optimizers

One approach to constructing a Friendly artificial intelligence is to create a piece of software that looks at large amounts of evidence about humans, and attempts to infer their values. I’ve been doing some thinking about this problem, and I’m going to talk about some approaches and problems that have occurred to me.

In a naive approach, we might define the problem like this: take some unknown utility function U and plug it into a mathematically clean optimization process O (something like AIXI). Then, given a data set recording humans’ inputs and outputs, find the simplest U that, when run through O, best explains the observed behavior.
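To make the setup concrete, here’s a toy sketch of what that fit might look like. Everything in it is a stand-in I invented for illustration: a softmax-optimal policy playing the role of the clean optimizer O, a linear U over a handful of state features, and an L1 penalty playing the role of “simplest”.

```python
# Illustrative sketch only: a toy version of "fit the simplest U that best
# explains observed behavior", with a softmax-optimal policy standing in for
# the idealized optimizer O.
import numpy as np
from scipy.optimize import minimize

# Toy world: 4 states, each described by 3 features; an action is just a
# choice of which state to move to next.
FEATURES = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0],
])
N_STATES = len(FEATURES)

def utility(w, state):
    """Candidate U: a linear function of state features, parameterized by w."""
    return FEATURES[state] @ w

def policy(w, beta=5.0):
    """The 'clean optimizer' O: in every state, softmax-maximize U over the
    states it could move to (rows = current state, cols = chosen next state)."""
    u = np.array([utility(w, s) for s in range(N_STATES)])
    logits = beta * np.tile(u, (N_STATES, 1))
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

# Pretend these are observed human (current state, chosen next state) pairs.
demonstrations = [(0, 2), (1, 2), (3, 2), (0, 2), (2, 2)]

def objective(w, reg=0.1):
    """Negative log-likelihood of the demonstrations plus a crude
    'simplicity' penalty on the utility parameters."""
    p = policy(w)
    nll = -sum(np.log(p[s, a] + 1e-12) for s, a in demonstrations)
    return nll + reg * np.abs(w).sum()

result = minimize(objective, x0=np.zeros(3), method="Nelder-Mead")
print("inferred utility weights:", result.x)
```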

Unfortunately, this won’t work. Because O optimizes perfectly, every systematic quirk in human behavior has to be absorbed into U itself, so the best possible match for U models not just the elements of human values we’re interested in, but also all the details of our broken, contradictory optimization process. The U we derive this way will optimize for confirmation bias, scope insensitivity, hindsight bias, the halo effect, our limited intelligence and inefficient use of evidence, and just about everything else that’s wrong with us. Not what we’re looking for.

Okay, so let’s try putting a band-aid on it. Go back to the original problem setup, but this time take our original O and use all of the science on cognitive biases at our disposal to handicap it. We’ll limit its search space, saddle it with a laundry list of cognitive biases, cripple its ability to use evidence, and in general make it as human-like as we possibly can. We could even give it akrasia by implementing hyperbolic discounting of reward. Then we’ll repeat the original process to produce U’.
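To gesture at one of these handicaps, here’s a toy illustration of the hyperbolic discounting just mentioned and the preference reversal (akrasia) it produces. The rewards, delays, and discount rates are all made up for the example.

```python
# Illustrative only: hyperbolic discounting as a "handicap", contrasted with
# the clean exponential discounting a tidy optimizer would use.
import math

def hyperbolic_value(reward, delay, k=1.0):
    """Hyperbolically discounted value: r / (1 + k*t)."""
    return reward / (1.0 + k * delay)

def exponential_value(reward, delay, rate=0.1):
    """Exponentially discounted value: r * exp(-rate * t)."""
    return reward * math.exp(-rate * delay)

small_soon = (10.0, 1.0)   # (reward, delay in days)
large_late = (25.0, 5.0)

for shift in (0.0, 30.0):  # face the same choice now vs. a month from now
    for name, value in (("hyperbolic", hyperbolic_value),
                        ("exponential", exponential_value)):
        v_small = value(small_soon[0], small_soon[1] + shift)
        v_large = value(large_late[0], large_late[1] + shift)
        pick = "small-soon" if v_small > v_large else "large-late"
        print(f"{name:11s} shift={shift:4.0f}d -> picks {pick}")

# The hyperbolic agent prefers the larger, later reward when both options are
# a month away, then reverses and grabs the smaller, sooner one as the choice
# gets close (akrasia).  The exponential agent never reverses.
```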

If we plug U’ into our AI, the result will be that it optimizes like a human suddenly stripped of all the kinds of stupidity we programmed into our modified O. This is good! Plugged into a solid CEV infrastructure, it might even be good enough to produce a future that’s a nice place to live. However, it’s not quite ideal. If we miss a cognitive bias, it gets incorporated into the learned utility function, and we may never be rid of it. What would be nice is if the AI could learn about cognitive biases itself, exhaustively, and update its model in the future if it ever discovered a new one.

If we had enough time and money, we could do this the hard way: acquire a representative sample of the human population, pay them to perform tasks with simple goals under tremendous surveillance, and have the AI derive the human optimization process from the actions taken toward a known goal. However, if we assume that the human optimization process is a function of the state of the human brain, we shouldn’t trust the completeness of any model of that process learned from less data than the entropy of the brain itself, which is on the order of tens of petabytes of extremely high-quality evidence. If we want to be confident in the completeness of our model, we may need more experimental evidence than it is really practical to accumulate. Which isn’t to say that this approach is useless: if we can hit close enough to the mark, the AI may be able to run more exhaustive experimentation later and refine its own understanding of human brains closer to the ideal.
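As a toy sketch of what “derive the human optimization process from actions taken toward a known goal” could look like: the goal is held fixed and known, and we fit the free parameters of a biased decision model to the observed choices by maximum likelihood. The model here (just a decision-noise temperature and a hyperbolic discount rate), the trial data, and the parameter values are all inventions for the example.

```python
# Illustrative sketch: with the goal (utility) known, fit the free parameters
# of a biased human decision model to observed choices.
import numpy as np
from scipy.optimize import minimize

# Each trial: two (reward, delay) options and the index the subject chose.
trials = [
    (((10.0, 1.0), (25.0, 5.0)), 0),
    (((10.0, 31.0), (25.0, 35.0)), 1),
    (((5.0, 0.0), (6.0, 2.0)), 0),
    (((5.0, 20.0), (6.0, 22.0)), 1),
    (((8.0, 0.0), (8.5, 1.0)), 1),   # an "inconsistent" choice, so the fitted noise stays finite
]

def choice_probs(options, beta, k):
    """Biased chooser: hyperbolically discounted values pushed through a
    softmax with inverse temperature beta (decision noise)."""
    v = np.array([r / (1.0 + k * t) for r, t in options])
    logits = beta * v
    logits -= logits.max()
    p = np.exp(logits)
    return p / p.sum()

def neg_log_likelihood(params):
    log_beta, log_k = params           # optimize in log-space to keep both positive
    beta, k = np.exp(log_beta), np.exp(log_k)
    nll = 0.0
    for options, chosen in trials:
        nll -= np.log(choice_probs(options, beta, k)[chosen] + 1e-12)
    return nll

fit = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
beta_hat, k_hat = np.exp(fit.x)
print(f"inferred decision noise beta={beta_hat:.2f}, discount rate k={k_hat:.2f}")
```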

But it’d really be nice if our AI could do unsupervised learning to figure out the details of human optimization. Then we could simply dump the internet into it, let it grind away at the data, and have it spit out a detailed, complete model of human decision-making from which our utility function could be derived. Unfortunately, this does not seem to be a tractable problem. It’s possible that some insight could be gleaned by examining outliers with normal intelligence but deviant utility functions (I’m thinking specifically of sociopaths), though it’s unclear how much these methods would actually yield. If anyone has suggestions for a more efficient way of going about it, I’d love to hear them. As it stands, it might be possible to get enough information from this to supplement a supervised learning approach: the closer we get to a perfectly accurate model, the higher the probability of Things Going Well.

Anyway, that’s where I am right now. I just thought I’d put up my thoughts and see if some fresh eyes catch anything I’ve been missing.

Cheers,

Niger