All models are wrong, some models are useful.
I think it's unambiguous that mapping the perceived/expected state of the universe to a value, for instantaneous decision-making, is a useful (and perhaps necessary) abstraction for modeling anything about decision-making. You don't seem to be disputing that; you're only claiming that a utility function that stays consistent over time is … unnecessary, incomplete, or incorrect (I'm not sure which).
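For concreteness, here is a minimal sketch of the abstraction I mean (Python, with a toy world model and hypothetical state/action names; nothing here is taken from your post): a utility function maps a predicted world-state to a scalar, and instantaneous decision-making is just picking the action whose predicted outcome scores highest.

```python
# Minimal sketch: utility as a map from a (predicted) world-state to a scalar,
# used for instantaneous decision-making. State/action names are hypothetical.

def utility(state: dict) -> float:
    # Any summary of how much the agent values this world-state.
    return 2.0 * state["paperclips"] - 1.0 * state["energy_spent"]

def predict(state: dict, action: str) -> dict:
    # Toy world model: what the agent expects the action to do.
    if action == "make_paperclip":
        return {"paperclips": state["paperclips"] + 1,
                "energy_spent": state["energy_spent"] + 1}
    return dict(state)  # "wait" changes nothing

def choose(state: dict, actions: list[str]) -> str:
    # Instantaneous decision: pick the action whose expected outcome scores highest.
    return max(actions, key=lambda a: utility(predict(state, a)))

print(choose({"paperclips": 0, "energy_spent": 0}, ["wait", "make_paperclip"]))
# -> "make_paperclip"
```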
You also seem to be questioning the idea that more capable/effective agents have more consistent utility functions. You reject the Dutch book argument, which is one of the intuitive supports for that belief, but I don't think you've proposed any case where inconsistency actually optimizes the universe better than consistent decisions would. Inconsistency also opens up the identity problem (is it really the same agent if it has different desires about a future world-state?), but even if you handwave that away, it's clear that making inconsistent decisions optimizes the universe less than making consistent ones.
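To spell out the Dutch book intuition I'm leaning on, here is the standard money-pump construction (my own illustration, not something from your post): an agent with cyclic preferences will pay for a sequence of trades that returns it to exactly where it started, strictly poorer.

```python
# Money-pump sketch (standard Dutch book intuition): an agent with cyclic
# preferences A > B > C > A pays a small fee for each "upgrade" and ends up
# holding the same item it started with, strictly poorer.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # cyclic, hence inconsistent

def accepts_trade(offered: str, held: str) -> bool:
    return (offered, held) in prefers

holding, money = "C", 10.0
fee = 1.0
for offered in ["B", "A", "C"]:          # the bookie's sequence of offers
    if accepts_trade(offered, holding):  # agent "upgrades" and pays the fee
        holding, money = offered, money - fee

print(holding, money)  # -> "C" 7.0 : same item, three fees poorer
```

A transitive preference ordering can't be pumped this way, which is (as I read it) the sense in which consistency buys optimization power.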
I think I'm with you that it may be impossible to have a fully consistent long-lived agent: the universe has some irreducible complexity that no portion of it can summarize well enough to evaluate a hypothetical. But I'm not with you if you're saying that an agent can be equally or more effective while changing its goals all the time.