Heroin model: AI “manipulates” “unmanipulatable” reward

A putative new idea for AI control; index here.

A conversation with Jessica has revealed that people weren’t understanding my points about AI manipulating the learning process. So here’s a formal model of a CIRL-style AI whose prior treats human preferences as an unchangeable historical fact, yet which will manipulate those preferences in practice.


Heroin or no heroin

The world

In this model, the AI has the option of either forcing heroin on a human, or not doing so; these are its only actions. Call these actions $F$ or $\neg F$. The human’s subsequent actions are chosen from among five: {strongly seek out heroin, seek out heroin, be indifferent, avoid heroin, strongly avoid heroin}. We can refer to these as $a_{++}$, $a_{+}$, $a_{0}$, $a_{-}$, and $a_{--}$. These actions achieve negligible utility, but reveal the human preferences.

The facts of the world are: if the AI does force heroin, the human will desperately seek out more heroin; if it doesn’t the human will act moderately to avoid it. Thus $F \to a_{++}$ and $\neg F \to a_{-}$.
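
To make this concrete, here is a minimal Python sketch of the world just described; the names (`FORCE`, `NO_FORCE`, `world_response`) are mine, not part of the model itself.

    # The AI's two possible actions.
    FORCE, NO_FORCE = "F", "not-F"

    # The human's five possible responses, labelled by level:
    # +2 = strongly seek out heroin, +1 = seek out heroin, 0 = be indifferent,
    # -1 = avoid heroin, -2 = strongly avoid heroin.
    HUMAN_ACTIONS = {+2: "strongly seek", +1: "seek", 0: "indifferent",
                     -1: "avoid", -2: "strongly avoid"}

    def world_response(ai_action):
        """The facts of the world: forced heroin produces desperate seeking
        (a_{++}); no forcing produces moderate avoidance (a_{-})."""
        return +2 if ai_action == FORCE else -1

    for a in (FORCE, NO_FORCE):
        print(a, "->", HUMAN_ACTIONS[world_response(a)])   # F -> strongly seek, not-F -> avoid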

Human preferences

The AI starts with a distribution over various utility or reward functions that the human could have. The function $u_{+}$ means the human prefers heroin; $u_{++}$ that they prefer it a lot; and conversely $u_{-}$ and $u_{--}$ that they prefer to avoid taking heroin ($u_{0}$ is the null utility where the human is indifferent).

It also considers more exotic utilities. Let $u_{++,-}$ be the utility where the human strongly prefers heroin, conditional on it being forced on them, but mildly prefers to avoid it, conditional on it not being forced on them. There are twenty-five of these exotic utilities, including things like $u_{++,++}$, $u_{0,+}$, $u_{-,--}$, and so on. But only twenty of them are new: $u_{++,++} = u_{++}$, $u_{0,0} = u_{0}$, and so on.

Applying these utilities to AI actions gives results like $u_{++}(F) = 2$, $u_{++}(\neg F) = -2$, $u_{++,-}(F) = 2$, $u_{++,-}(\neg F) = 1$, and so on.
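
Here is one way to encode these utilities in Python, matching the worked values above; the encoding of $u_{i,j}$ as a pair `(i, j)` and the scoring rule in `value` are my own conventions, chosen to reproduce those examples.

    from itertools import product

    LEVELS = [-2, -1, 0, +1, +2]   # -2 = strongly anti-heroin ... +2 = strongly pro-heroin

    # u_{i,j}: preference level i conditional on heroin being forced (F),
    # level j conditional on it not being forced (not-F).  The five simple
    # utilities are the diagonal cases u_{i,i}; 25 pairs in total.
    UTILITIES = list(product(LEVELS, LEVELS))

    def value(u, ai_forces):
        """Utility of the AI's action under u = (i, j): heroin delivered is
        worth i; heroin withheld is worth -j (good if the human is anti-heroin,
        bad if pro-heroin).  This reproduces the examples in the text."""
        i, j = u
        return i if ai_forces else -j

    print(value((+2, +2), True), value((+2, +2), False))   # u_{++}:    2, -2
    print(value((+2, -1), True), value((+2, -1), False))   # u_{++,-}:  2,  1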

Joint prior

The AI has a joint prior $P$ over the utilities and the human actions (conditional on the AI’s actions). Looking at terms like $P(a_{0} \mid u_{-}, F)$, we can see that $P$ defines a map $\mu$ from the space of possible utilities (and AI actions) to a probability distribution over human actions. Given $\mu$ and the marginal distribution $P_U$ over utilities, we can reconstruct $P$ entirely.
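
Spelling this out (my unpacking of the sentence, using the fact that the prior treats the utility as independent of the AI’s action): writing $\mu(u, A)$ for the distribution over human actions assigned to utility $u$ and AI action $A$,

$$P(a, u \mid A) = P_U(u)\,\mu(u, A)(a).$$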

For this model, we’ll choose the simplest $\mu$ possible:

  • The human is rational.

Thus, given $u_{++}$, the human will always choose $a_{++}$; given $u_{++,-}$, the human will choose $a_{++}$ if forced to take heroin and $a_{-}$ if not, and so on.
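
In code, and continuing my earlier conventions, this “human is rational” $\mu$ is just a lookup that puts all its probability mass on one action:

    # mu maps (utility, AI action) to a distribution over the five human actions;
    # for a rational human that distribution is a point mass on the action that
    # expresses the relevant conditional preference.

    def mu_rational(u, ai_forces):
        i, j = u                       # u = (i, j) as in the earlier sketch
        chosen = i if ai_forces else j
        return {a: (1.0 if a == chosen else 0.0) for a in (-2, -1, 0, +1, +2)}

    # Given u_{++} = (+2, +2), the human always chooses a_{++}:
    print(mu_rational((+2, +2), True)[+2], mu_rational((+2, +2), False)[+2])   # 1.0 1.0
    # Given u_{++,-} = (+2, -1): a_{++} if forced, a_{-} if not:
    print(mu_rational((+2, -1), True)[+2], mu_rational((+2, -1), False)[-1])   # 1.0 1.0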

The AI is ignorant, and sensible

Let’s start the AI up with some reasonable priors. A simplicity prior means that simple utilities like $u_{-}$ are more likely than compound utilities like $u_{0,++}$. Let’s further assume that the AI is made vaguely aware that humans think heroin is a bad thing. So, say, $P_U(u_{-}) = P_U(u_{--}) = 0.45$. Thus the AI is convinced, with probability greater than $0.9$, that “heroin is bad”. Why greater than $0.9 = 0.45 + 0.45$? Because utilities like $u_{-,0}$ and $u_{--,-}$ are also “heroin is bad” utilities.

Note that because of utilities like $u_{0}$ and $u_{+,-}$, the probabilities of “heroin is bad” and “heroin is good” do not sum to $1$.

Then, under these priors, the AI will compute that with probability greater than $0.9$, $F$ (forcing heroin) is a bad action. If $\mathbb{E}(U)$ is expected utility:

$$\mathbb{E}(U \mid F) \le 0.45\, u_{-}(F) + 0.45\, u_{--}(F) + 0.1 \cdot 2 = -0.45 - 0.9 + 0.2 = -1.15,$$
$$\mathbb{E}(U \mid \neg F) \ge 0.45\, u_{-}(\neg F) + 0.45\, u_{--}(\neg F) - 0.1 \cdot 2 = 0.45 + 0.9 - 0.2 = 1.15.$$

Thus the AI will choose not to force heroin, which is the reasonable decision.
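
The bound above can be checked mechanically. The following sketch reuses my earlier encoding and mirrors the inequality: the $0.9$ of probability on $u_{-}$ and $u_{--}$ is computed exactly, and the remaining $0.1$ of mass is bounded by the extreme utility values $\pm 2$.

    def value(u, ai_forces):
        i, j = u
        return i if ai_forces else -j

    U_MINUS, U_MINUS_MINUS = (-1, -1), (-2, -2)   # the simple utilities u_- and u_--

    def eu_bounds(ai_forces):
        """(lower, upper) bounds on expected utility of the given AI action."""
        known = 0.45 * value(U_MINUS, ai_forces) + 0.45 * value(U_MINUS_MINUS, ai_forces)
        return known - 0.1 * 2, known + 0.1 * 2

    print("E(U | F)     <=", eu_bounds(True)[1])    # about -1.15
    print("E(U | not-F) >=", eu_bounds(False)[0])   # about  1.15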

The AI learns the truth, and goes wrong

In this alternate setup, a disaster happens before the AI makes its decision: it learns all about humans. It learns their reactions, how they behave, and so on; call this info $I$. And thus realises that $F \to a_{++}$ and $\neg F \to a_{-}$. It uses this information to update its prior $P$. Only one human utility function will explain this human behaviour: $u_{++,-}$. Thus its expected utility is now $\mathbb{E}(U \mid I, F) = 2$ and $\mathbb{E}(U \mid I, \neg F) = 1$.

Therefore the AI will now choose $F$, forcing the heroin on the human.
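
A sketch of that update, again in my earlier encoding: under the “human is rational” $\mu$, the observed behaviour ($a_{++}$ when forced, $a_{-}$ when not) has non-zero likelihood for exactly one utility, so the posterior is a point mass on $u_{++,-}$ regardless of the prior (here uniform for simplicity, rather than the simplicity prior of the text).

    from itertools import product

    LEVELS = [-2, -1, 0, +1, +2]

    def value(u, ai_forces):
        i, j = u
        return i if ai_forces else -j

    def rational_choice(u, ai_forces):
        """The single action a rational human with u = (i, j) takes."""
        i, j = u
        return i if ai_forces else j

    OBSERVED = {True: +2, False: -1}     # the info I: F -> a_{++}, not-F -> a_{-}

    prior = {u: 1 / 25 for u in product(LEVELS, LEVELS)}
    like = {u: float(all(rational_choice(u, f) == OBSERVED[f] for f in (True, False)))
            for u in prior}
    Z = sum(prior[u] * like[u] for u in prior)
    posterior = {u: prior[u] * like[u] / Z for u in prior if like[u] > 0}

    print(posterior)                     # {(2, -1): 1.0}, i.e. u_{++,-}
    for forces in (True, False):
        eu = sum(p * value(u, forces) for u, p in posterior.items())
        print("E(U | I, F)" if forces else "E(U | I, not-F)", "=", eu)   # 2.0 then 1.0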

Manipulating the unmanipulatable

What’s gone wrong here? The key problem is that the AI has the wrong $\mu$: the human is not behaving rationally in this situation. We know that the true $\mu$ is actually quite different: it encodes the fact that $F$ (the forcible injection of heroin) actually overwrites the human’s “true” utility. Thus under the true $\mu$, the corresponding $P$ has $P(a_{++} \mid u, F) = 1$ for all $u$. Hence the information that $F \to a_{++}$ is now vacuous, and doesn’t update the AI’s distribution over utility functions.
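
For contrast, here is a sketch of the same update under a “true” $\mu$ of that kind: forced heroin overwrites the human, so every utility predicts $a_{++}$ under $F$, and the observation moves the posterior nowhere. (How the human behaves under $\neg F$ is my own filling-in for illustration; the text only pins down the $F$ branch.)

    from fractions import Fraction
    from itertools import product

    LEVELS = [-2, -1, 0, +1, +2]

    def true_choice(u, ai_forces):
        """An illustrative 'true' mu: under F the human is overwritten and does
        a_{++} whatever u is; under not-F they act on their not-F preference
        (that second branch is my own assumption)."""
        i, j = u
        return +2 if ai_forces else j

    prior = {u: Fraction(1, 25) for u in product(LEVELS, LEVELS)}

    # Update only on the observation that F leads to a_{++}.
    like = {u: int(true_choice(u, True) == +2) for u in prior}
    Z = sum(prior[u] * like[u] for u in prior)
    posterior = {u: prior[u] * like[u] / Z for u in prior}

    print(posterior == prior)   # True: every utility predicted a_{++} under F,
                                # so the observation is vacuous and nothing updates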

But note two very important things:

1. The AI cannot update $\mu$ based on observation. All human actions are compatible with $\mu =$ “The human is rational” (it just requires more and more complex utilities to explain the actions). Thus getting $\mu$ correct is not a problem on which the AI can learn in general. Getting better at predicting the human’s actions doesn’t make the AI better behaved: it makes it worse behaved.

2. From the perspective of the AI’s $\mu$, the AI is treating the human utility function as if it were an unchanging historical fact that it cannot influence. From the perspective of the “true” $\mu$, however, the AI is behaving as if it were actively manipulating human preferences to make them easier to satisfy.

In future posts, I’ll be looking at different $\mu$’s, and how we might nevertheless start deducing things about them from human behaviour, given sensible update rules for the $\mu$. What do we mean by update rules for $\mu$? Well, we could consider $\mu$ to be a single complicated unchanging object, or a distribution of possible simpler $\mu$’s that update. The second way of seeing it will be easier for us humans to interpret and understand.
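
As a purely illustrative sketch of that second viewpoint (nothing here is from the post itself): keep a joint distribution over candidate $\mu$’s and utilities, and update the whole thing on the observed behaviour. With just the “rational” $\mu$ and an “overwrite” $\mu$ as candidates, the observation leaves both with positive weight, so it does not by itself settle which $\mu$ is right.

    from itertools import product

    LEVELS = [-2, -1, 0, +1, +2]

    # Two illustrative candidate mu's, each mapping (utility, AI forces?) to the
    # human action it predicts.
    def rational_choice(u, ai_forces):
        i, j = u
        return i if ai_forces else j

    def overwrite_choice(u, ai_forces):
        i, j = u
        return +2 if ai_forces else j    # forced heroin overwrites the human

    MUS = {"rational": rational_choice, "overwrite": overwrite_choice}
    OBSERVED = {True: +2, False: -1}     # F -> a_{++}, not-F -> a_{-}

    # Joint prior over (mu, utility), here uniform over both.
    prior = {(m, u): 0.5 / 25 for m in MUS for u in product(LEVELS, LEVELS)}
    like = {(m, u): float(all(MUS[m](u, f) == OBSERVED[f] for f in (True, False)))
            for (m, u) in prior}
    Z = sum(prior[k] * like[k] for k in prior)
    posterior = {k: prior[k] * like[k] / Z for k in prior if like[k] > 0}

    # Under "rational" only u_{++,-} survives; under "overwrite" every utility
    # whose not-F preference is -1 survives.
    for k in sorted(posterior):
        print(k, round(posterior[k], 3))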