One motivation for UDT is that updating makes an agent stop caring about updated-away possibilities, whereas UDT does not.
I think there’s an ambiguity here. UDT makes the agent stop considering updated-away possibilities, but I haven’t seen any discussion of UDT which suggests that it stops caring about them in principle (except for a brief suggestion from Paul that one option for UDT is to “go back to a position where I’m mostly ignorant about the content of my values”). Rather, the discussions of UDT I’ve seen focus on updating or un-updating your epistemic state.
I don’t think the shift I’m proposing is particularly important, but I do think the idea that “you have your prior and your utility function from the very beginning” is a kinda misleading frame to be in, so I’m trying to nudge a little away from that.
UDT makes the agent stop considering updated-away possibilities, but I haven’t seen any discussion of UDT which suggests that it stops caring about them in principle
UDT specifically enables agents to consider the updated-away possibilities in a way relevant to decision making, while an updated agent (that’s not using something UDT-like) wouldn’t be able to do that in any circumstance, and so would be functionally indistinguishable from an agent that has different preferences or undefined preferences for those possibilities. Not caring about them seems like an apt informal description (even though this is compatible with keeping the same utility function outside the event of current knowledge). In a similar way, we could say that after updating, an agent either changes their probability distribution or keeps the original prior.
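(To make the contrast concrete, here is a minimal Python sketch of the standard counterfactual-mugging setup; the payoffs, function names, and framing are my own illustrative choices rather than anything from this exchange. It shows the functional difference being described: the updateful agent conditions on the observed branch before choosing, so the updated-away branch carries no weight, while the updateless agent scores whole policies against the prior, so that branch still matters to its decision.)

```python
# Toy sketch (illustrative only, with made-up payoffs): counterfactual mugging.
# Omega flips a fair coin. On tails it asks the agent to pay $100; on heads it
# pays the agent $10,000 iff it predicts the agent would pay in the tails branch.

BRANCHES = {"heads": 0.5, "tails": 0.5}  # prior over Omega's coin

def payoff(branch: str, pays_when_asked: bool) -> float:
    """Utility, in each branch, of the policy 'pay when asked' (or 'refuse')."""
    if branch == "tails":                 # Omega asks for $100 here
        return -100.0 if pays_when_asked else 0.0
    # heads: Omega rewards the agent iff it predicts the agent pays on tails
    return 10_000.0 if pays_when_asked else 0.0

def updateful_choice(observed_branch: str) -> bool:
    """Condition on the observation first, then pick the better action.
    The other branch has been updated away, so it gets no weight at all."""
    return payoff(observed_branch, True) > payoff(observed_branch, False)

def updateless_choice() -> bool:
    """Pick the policy with the best prior-expected utility across all branches,
    including the ones the observation would rule out."""
    ev = lambda pays: sum(p * payoff(b, pays) for b, p in BRANCHES.items())
    return ev(True) > ev(False)

print(updateful_choice("tails"))  # False: refuses to pay after updating on tails
print(updateless_choice())        # True: pays, because the heads branch still counts
```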
I do think the idea that “you have your prior and your utility function from the very beginning” is a kinda misleading frame to be in
Historically it was overwhelmingly the frame until recently, so it’s the correct frame for interpreting the intended meaning of texts from that time. This is a simplifying assumption that still leaves many open questions about how to make decisions in sufficiently strange situations (and mere models of behavior already make such strange situations ubiquitous in practice). When an agent doesn’t know its own preferences and needs to do something about that, it’s an additional complication that usually wasn’t introduced.
UDT specifically enables agents to consider the updated-away possibilities in a way relevant to decision making, while an updated agent (that’s not using something UDT-like) wouldn’t be able to do that in any circumstance
Agreed; apologies for the sloppy phrasing.
Historically it was overwhelmingly the frame until recently, so it’s the correct frame for interpreting the intended meaning of texts from that time.
I agree, that’s why I’m trying to outline an alternative frame for thinking about it.