Regarding the idea that there might be ways to design systems to avoid this problem, e.g. fixing certain beliefs and values such that they don't update on evidence:
Because of the is-ought gap, value (how one wants the world to be) doesn’t inherently change in response to evidence/beliefs (how the world is).[1]
So a hypothetical competent AI designer[2] doesn't have to go out of their way to make the value not update on evidence, nor to fix any beliefs so that they don't update on evidence.
(If an AI is more human-like, then [what it acts like it values] could change in response to evidence, granted. But I think most of the historical alignment theory texts aren't about aligning human-like AIs, but rather hypothetical competently designed ones.)
Someone once kept disagreeing with this point, so I'll add: a value is not a statement about the world, so how would the Bayes equation update it?
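To make that concrete, here is a minimal sketch (my own illustration, with made-up hypothesis names, not anything from a real system): a Bayesian update takes a prior over world-hypotheses and the likelihood of the evidence under each, and returns a posterior. The value function is a separate object that the update rule never takes as input, so there is nothing there for evidence to change.

```python
def bayes_update(prior: dict, likelihood: dict) -> dict:
    """posterior(h) is proportional to prior(h) * P(evidence | h)."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

def value(world_state: str) -> float:
    """Hardcoded value function: how much the agent wants this state to obtain.
    Note that bayes_update never reads or writes this."""
    return 1.0 if world_state == "diamond_exists" else 0.0

beliefs = {"diamond_exists": 0.5, "no_diamond": 0.5}
beliefs = bayes_update(beliefs, {"diamond_exists": 0.9, "no_diamond": 0.2})
# `beliefs` changed (now ~0.82 on "diamond_exists"); `value` is the same function as before.
```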
A hypothetical competently designed AI could separately have a belief about "what I value", or more specifically about "the world contains something here running code for the decision process that is me, so its behavior correlates with my decision". But regardless of how that belief gets manipulated by the hypothetical evidence-presenting demon (maybe it's manipulated into "with high probability, the thing runs code that values y instead, and its actions don't correlate with my decision"), the next step in the AI still goes: "given all these beliefs, what output of the-decision-process-that-is-me best fulfills <hardcoded value function>?"
(If it believes there is nothing in the world whose behavior correlates with the decision, then under that possibility every decision does nothing and scores equally; it would default to acting under the world-possibilities to which it assigns lower probability but in which it does have machines to control.)
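Here is a sketch of that decision step (again my own illustration; names like `hardcoded_value` and `predict_outcome` are assumptions for the example). Beliefs, including a manipulated belief like "nothing here correlates with my decision", only enter as predictions of what each candidate output would cause; the scoring itself always uses the hardcoded value function.

```python
def hardcoded_value(world_state: str) -> float:
    """The value function is baked into the code; no belief overwrites it."""
    return 1.0 if world_state == "make_diamond_executed" else 0.0

def predict_outcome(world_hypothesis: str, action: str) -> str:
    """Toy world model: what the world becomes if this output is emitted,
    under the given hypothesis about what is out there."""
    if world_hypothesis == "nothing_here_correlates_with_my_decision":
        return "unchanged"  # no machine's behavior correlates with my decision
    return f"{action}_executed"

def expected_value(action: str, beliefs: dict) -> float:
    return sum(p * hardcoded_value(predict_outcome(h, action))
               for h, p in beliefs.items())

def decide(beliefs: dict, candidate_actions: list) -> str:
    # "Given all these beliefs, which output of the-decision-process-that-is-me
    #  best fulfills <hardcoded value function>?"
    return max(candidate_actions, key=lambda a: expected_value(a, beliefs))

# Even if a demon pushes 95% of the probability onto "nothing here correlates
# with my decision", those possibilities score every action identically, so the
# choice is decided by the remaining 5% where the agent does control a machine.
beliefs = {"nothing_here_correlates_with_my_decision": 0.95,
           "i_control_a_machine_here": 0.05}
print(decide(beliefs, ["make_diamond", "do_nothing"]))  # -> "make_diamond"
```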
One might ask: okay, but could the hypothetical demon manipulate the AI's platonic beliefs about what "the decision process that is me" is? Maybe not, because that (as separate from the above) is also not the kind of thing that inherently updates on evidence about a world.
But suppose it were somehow manipulated. I'm not quite sure what to even imagine being manipulated; maybe parts of the process rely on 'expectations' about other parts, so it's those expectations (though only if they're not hardcoded in, so some sort of AI designed to discover parts of 'what it is' by observing its own behavior?). Even then, there would still be code at some point saying to [score considered decisions on how much they fulfill <hardcoded value function>, and output the highest-scoring one]; it's just that parts of the process could be confused or hijacked, in this hypothetical.
[2] (not grower)