Not quite true: state of knowledge corresponds to beliefs. It’s values that don’t update (though in expected utility maximization, “values” covers both utility and prior). Again, it’s misleading to equate beliefs with the prior and forget about the knowledge (the event that conditions the current state).
Yes, I agree we can interpret UDT as having its own dichotomy between beliefs and values, but the dividing line looks very different from how humans divide between beliefs and values, which seems closer to the probability/utility divide.
UDT is invariant with respect to which universe it’s actually in. This requires it to compute over infinitely many universes, and thus to have infinite computing power. It’s not hard to see why it breaks down as a model of in-universe, limited beings.
What do you mean? It has a utility function just like most other decision theories do. The preferences are represented by the utility function.