A most peculiar utility function

In a previous post, I glibly suggested a utility function that would allow a v-maximising agent to print out the expectation of the utility it was maximising.

But u turns out to be very peculiar indeed.

For this post, define

  • u = 2Xv - X²,

for some X that is a future output of the AI at time t (assume that X won’t be visible or known to anyone except the AI). Assume that v only takes non-negative values.
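
As a quick sanity check, here is a minimal numerical sketch (the particular two-point distribution for v is made up purely for illustration) showing that the output X maximising the expectation of u = 2Xv - X² is X = E(v), and that the resulting expected utility is E(v)².

```python
import numpy as np

# Made-up two-point distribution for v (illustration only):
# v = 0.2 with probability 0.3, v = 1.0 with probability 0.7.
values = np.array([0.2, 1.0])
probs = np.array([0.3, 0.7])
Ev = float(probs @ values)  # E(v) = 0.76

def expected_u(X):
    """E[u] = E[2Xv - X^2] = 2*X*E(v) - X^2 under the distribution above."""
    return float(probs @ (2 * X * values - X ** 2))

# Scan candidate outputs X and pick the one with the highest E[u].
Xs = np.linspace(0.0, 1.5, 1501)
best_X = max(Xs, key=expected_u)

print(f"E(v)           = {Ev:.3f}")
print(f"argmax_X E[u]  = {best_X:.3f}")               # matches E(v)
print(f"max_X E[u]     = {expected_u(best_X):.4f}")   # matches E(v)^2
print(f"E(v)^2         = {Ev ** 2:.4f}")
```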

Peculiar subjectivity

I’d always thought that utility functions were clear on the difference between objectivity and subjectivity: that probabilities of probabilities or such didn’t make sense, and that we couldn’t eg ask the AI to maximise E(v)² (though we could ask it to maximise E(v²), no problem).

But u blurs this. It seems a perfectly respectable utility function: the fact that one component is user-defined shouldn’t change this. What will a u-maximiser do?

Well, first of all, at time t, it will pick X = E(v), and, afterwards, maximise v. This will give it an expected utility of E(v)².
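
Spelling that step out (a short derivation from the definition of u above, taken over whatever distribution the AI assigns to v at time t):

```latex
% Expected utility as a function of the AI's output X:
\[
  E(u) \;=\; E\!\left(2Xv - X^2\right) \;=\; 2X\,E(v) - X^2 .
\]
% This is a downward-opening quadratic in X; setting the derivative to zero:
\[
  \frac{d}{dX}\Bigl(2X\,E(v) - X^2\Bigr) \;=\; 2E(v) - 2X \;=\; 0
  \quad\Longrightarrow\quad X = E(v),
\]
% and substituting back in:
\[
  \max_X E(u) \;=\; 2E(v)^2 - E(v)^2 \;=\; E(v)^2 .
\]
```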

Here the maximised quantity is E(v)² = f(E(v)) for f(x) = x², but it turns out that there are versions that work with any differentiable convex function f - x², x⁴, e^x, x log x, …
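
The post doesn’t spell out the general construction here; one standard way to get it (my own sketch, using the tangent-line characterisation of convexity, as in proper scoring rules, rather than anything quoted from the original) is the following, with u_f playing the role of u:

```latex
% Hypothetical general form: for a differentiable convex f, let
\[
  u_f \;=\; f(X) \;+\; f'(X)\,\bigl(v - X\bigr),
  \qquad
  E(u_f) \;=\; f(X) \;+\; f'(X)\,\bigl(E(v) - X\bigr).
\]
% Convexity of f means the tangent line at X lies below the graph of f:
\[
  f(X) + f'(X)\,\bigl(E(v) - X\bigr) \;\le\; f\bigl(E(v)\bigr),
  \qquad \text{with equality at } X = E(v),
\]
% so the best output is again X = E(v), for an expected utility of f(E(v)).
% Taking f(x) = x^2 recovers u = 2Xv - X^2 exactly.
```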

Thus, maximising u, before t, involves maximising E(v)². Note that this is distinct from maximising either E(v) or E(v²).

Consider the following three options the AI can take:

A) v = 1/2.
B) v = 0 with 50% probability, v = 1 with 50% probability; the AI will not know which happens before t.
C) v = 0 with 50% probability, v = 1 with 50% probability; the AI will know which happens before t.

Then a v-maximiser will be indifferent between the three, while a v²-maximiser will choose B or C. But a u-maximiser will choose C, and only C.
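
A quick numerical check of these claims, using the option values assumed above (the “knows before t” case is modelled by letting the output X depend on the realised v):

```python
# Each option is a list of (probability, v) outcomes, as assumed above.
A = [(1.0, 0.5)]
B = [(0.5, 0.0), (0.5, 1.0)]   # AI does NOT learn v before t
C = [(0.5, 0.0), (0.5, 1.0)]   # AI DOES learn v before t

def E(option, f=lambda v: v):
    """Expectation of f(v) under the option's distribution."""
    return sum(p * f(v) for p, v in option)

def expected_u(option, knows_v_before_t):
    """Expected u = 2Xv - X^2 under the AI's best choice of output X.

    If the AI learns v before time t it can set X = v in each branch;
    otherwise it must commit to a single X, and the best one is X = E(v).
    """
    if knows_v_before_t:
        return sum(p * (2 * v * v - v ** 2) for p, v in option)
    X = E(option)
    return 2 * X * E(option) - X ** 2

print("E(v):   A =", E(A), " B =", E(B), " C =", E(C))   # all 0.5: a v-maximiser is indifferent
print("E(v^2): A =", E(A, lambda v: v ** 2),
      " B =", E(B, lambda v: v ** 2),
      " C =", E(C, lambda v: v ** 2))                    # B and C tie at 0.5, beating A's 0.25
print("E(u):   A =", expected_u(A, False),
      " B =", expected_u(B, False),
      " C =", expected_u(C, True))                       # only C reaches 0.5: the u-maximiser picks C
```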

Note that since f is convex, the AI will always benefit from finding out more information about v (and will never suffer from it, in expectation).
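
This is Jensen’s inequality applied to whatever information the AI might acquire before t (I write that information as I; the notation is mine, not the post’s):

```latex
% With information I, the AI outputs X = E(v | I) and expects f(E(v | I));
% without it, it gets f(E(v)).
\[
  E\Bigl[\,f\bigl(E(v \mid I)\bigr)\Bigr]
  \;\ge\;
  f\Bigl(E\bigl[E(v \mid I)\bigr]\Bigr)
  \;=\;
  f\bigl(E(v)\bigr),
\]
% by Jensen's inequality for the convex f, so extra information about v never
% lowers the AI's expected utility, and generically raises it.
```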

And note that this happens without there being any explicit definition of the expectation E(v) in the utility function.

Peculiar corrigibility

The AI will shift smoothly from an E(v)²-maximiser before time t to a simple v-maximiser after time t, making this a very peculiar form of corrigibility.
