Expected utility is not something that “goes up” as the AI develops. It’s the utility of all it expects to achieve, ever. It may obtain more information about what the outcome will be, but each piece of evidence is necessarily expected to bring the outcome either up or down, with no way to know in advance which way it’ll go.
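The claim above is the “conservation of expected evidence” property: under Bayesian updating, the prior expected utility equals the expectation (over possible observations) of the posterior expected utility, so no anticipated piece of evidence can be expected in advance to push it in a particular direction. A toy numerical sketch of this, with made-up states, probabilities, and utilities chosen purely for illustration:

```python
# Toy two-state Bayesian model (all numbers hypothetical).
# Shows expected utility is a martingale: averaging the updated
# expected utility over the predictive distribution of observations
# recovers the prior expected utility exactly.

prior = {"A": 0.3, "B": 0.7}        # prior over world states
utility = {"A": 10.0, "B": 2.0}     # utility achieved in each state

# Likelihood of each observation given each state.
likelihood = {
    "o1": {"A": 0.8, "B": 0.4},
    "o2": {"A": 0.2, "B": 0.6},
}

def expected_utility(dist):
    return sum(p * utility[s] for s, p in dist.items())

def update(obs):
    """Return (posterior over states, marginal probability of obs)."""
    unnorm = {s: likelihood[obs][s] * prior[s] for s in prior}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}, z

eu_prior = expected_utility(prior)

# Expectation of the posterior expected utility over observations.
eu_after = 0.0
for obs in likelihood:
    post, p_obs = update(obs)
    eu_after += p_obs * expected_utility(post)

# Evidence may move the estimate up (o1) or down (o2), but the
# expected movement is zero.
assert abs(eu_prior - eu_after) < 1e-12
```

Any particular observation shifts the estimate (here `o1` raises it and `o2` lowers it), but the probability-weighted average of those shifts is exactly zero, which is the sense in which expected utility cannot be expected to “go up”.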
Can you elaborate? I understand what you wrote (I think) but don’t see how it applies.

Hmm, I don’t see how it applies either, at least under default assumptions—as I recall, this piece of cached thought was regurgitated instinctively in response to sloppily looking through your comment and encountering the phrase
This utility is monotonic in time, that is, it never decreases, and is bounded from above.
which was for some reason interpreted as confusing utility with expected utility. My apologies, I should be more conscientious, at least about the things I actually comment on...
No worries. I’d still be curious to hear your thoughts, as I haven’t received any responses that help me understand how this utility function might fail. Should I expand on the original post?