Coherence is not about whether a system “can be well-modeled as a utility maximizer” for some utility function over anything at all; it’s about whether a system can be well-modeled as a utility maximizer for utility over some specific stuff.
The utility in the toy coherence theorem in this post is very explicitly over final states, and the theorem says nontrivial things mainly when the agent is making decisions at earlier times in order to influence that final state—i.e. the agent is optimizing the state “far away” (in time) from its current decision. That’s the prototypical picture in my head when I think of coherence. Insofar as an incoherent system can be well-modeled as a utility maximizer, its optimization efforts must be dominated by relatively short-range, myopic objectives. Coherence arguments kick in when optimization for long-range objectives dominates.
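To make that distinction concrete, here is a minimal sketch (not the post's actual Toy Coherence Theorem; the transition table, step reward, final-state utilities, and horizon are all made up for illustration). It contrasts an agent that plans by backward induction toward a utility over final states with an agent that greedily chases a per-step reward and ignores where it ends up:

```python
STATES = ["A", "B", "C", "D"]
ACTIONS = ["left", "right"]
HORIZON = 2

# Hypothetical deterministic transitions: NEXT[(state, action)] -> next state.
NEXT = {
    ("A", "left"): "B", ("A", "right"): "C",
    ("B", "left"): "A", ("B", "right"): "D",
    ("C", "left"): "D", ("C", "right"): "A",
    ("D", "left"): "C", ("D", "right"): "B",
}

# Utility is assigned only to the *final* state -- the stuff "far away" in time.
FINAL_UTILITY = {"A": 0.0, "B": 1.0, "C": 2.0, "D": 10.0}

# A short-range per-step reward the myopic agent chases instead.
STEP_REWARD = {"left": 1.0, "right": 0.0}


def plan_non_myopic(start):
    # Backward induction: the value of a state at time t is the best final-state
    # utility reachable from it, so every earlier action is chosen purely for
    # its effect on the final state.
    value = dict(FINAL_UTILITY)  # values at t = HORIZON
    policy = []                  # policy[t][state] -> action
    for _ in range(HORIZON):
        step_policy = {s: max(ACTIONS, key=lambda a: value[NEXT[(s, a)]])
                       for s in STATES}
        value = {s: value[NEXT[(s, step_policy[s])]] for s in STATES}
        policy.insert(0, step_policy)
    # Roll the plan forward from the start state.
    s, actions = start, []
    for t in range(HORIZON):
        a = policy[t][s]
        actions.append(a)
        s = NEXT[(s, a)]
    return actions, s


def plan_myopic(start):
    # Greedy per-step reward, indifferent to the final state.
    s, actions = start, []
    for _ in range(HORIZON):
        a = max(ACTIONS, key=lambda act: STEP_REWARD[act])
        actions.append(a)
        s = NEXT[(s, a)]
    return actions, s


for name, planner in [("non-myopic", plan_non_myopic), ("myopic", plan_myopic)]:
    actions, final = planner("A")
    print(f"{name}: actions={actions}, final state={final}, "
          f"final utility={FINAL_UTILITY[final]}")
```

Run as-is, the final-state planner accepts a low-reward first step in order to end in the high-utility state (A → B → D), while the greedy agent collects more per-step reward but ends up back where it started with zero final utility.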
My understanding based on this is that your definition of “reasonable” as per my post is “non-myopic” or “concerned with some future world state”?
Yes.
In my head I usually think of it as non-myopic in spacetime (as opposed to just time), but the version which is (somewhat) justified by the Toy Coherence Theorem is non-myopia over time.
Is there a good definition of non-myopic in spacetime?
Optimization at a Distance has the mental picture in it, which… is not not a definition.