And why exactly does this ‘play randomly for 3 moves, then apply material advantage’ utility do better than just applying material advantage?
In this instance, they won’t differ at all. But if the AI had some preferences outside of the chess board, then the indifferent AI would be open to playing any particular move (for the first three turns) in exchange for some other separate utility gain.
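To make the structural point concrete, here is a minimal sketch of such an indifferent utility in Python. It uses the python-chess package for illustration; the names `indifferent_utility` and `material_advantage`, the sample counts, the choice to make *both* sides move randomly, and the shortcut of scoring raw material right after the random plies are all my own simplifications, not part of the original proposal.

```python
import random
import chess  # pip install python-chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material_advantage(board, colour=chess.WHITE):
    """Material balance of the position from `colour`'s point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == colour else -value
    return score

def indifferent_utility(opening_moves, random_plies=6, samples=200):
    """Expected material advantage *as if* the opening had been random.

    `opening_moves` (the moves actually played) is deliberately unused:
    every opening scores the same, so the agent gains nothing by picking
    one opening over another -- that is the indifference. A fuller
    version would evaluate play from the sampled positions onward under
    the normal objective, rather than the raw material count here.
    """
    total = 0
    for _ in range(samples):
        board = chess.Board()
        for _ in range(random_plies):  # counterfactual random opening
            if board.is_game_over():
                break
            board.push(random.choice(list(board.legal_moves)))
        total += material_advantage(board)
    return total / samples
```

Because `opening_moves` never enters the computation, the exact expectation is identical for every opening (two calls differ only by Monte Carlo noise). Note also that the function takes only the game record as input and makes no reference to the agent itself, which is the point made in the reply below.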
Plus, you've got yourself a utility function that is entirely ill-defined in a screwy, self-referential way.
In fact, no. It seems that way because of the informal language I used, but the utility function is perfectly well defined without any reference to the AI. The only self-reference is the usual one: how do I predict my future actions now?
If you mean that an indifferent utility function can make these predictions harder or more necessary in some circumstances, then you are correct, but this seems trivial for a superintelligence.