Good question. They implicitly assume a dynamic choice principle and a choice function that leaves the agent non-opportunistic.
Their dynamic choice principle is something like myopia: the agent only looks at its current node’s immediate successors and, if a successor is yet another choice node, the agent represents it as some ‘default’ prospect.
Their choice rule is something like this: the agent assigns some natural ‘default’ prospect and deviates from it iff it prefers some other prospect. (So if some prospect is incomparable to the default, it’s never chosen.)
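To make that concrete, here’s a minimal Python sketch of the rule as described. The prospects A, A_minus, B and the single strict preference A ≻ A_minus are hypothetical placeholders of mine, not anything from their formal setup:

```python
# Toy sketch of the default-based choice rule described above.
# Prospects and preferences are hypothetical: only A > A_minus; everything
# else is incomparable (preferences are incomplete).
STRICT_PREFS = {("A", "A_minus")}  # (better, worse) pairs

def prefers(x, y):
    """True iff prospect x is strictly preferred to prospect y."""
    return (x, y) in STRICT_PREFS

def myopic_default_choice(default, alternatives):
    """Stick with the default unless some alternative is strictly preferred.

    An alternative that is merely incomparable to the default never triggers
    a deviation, so it is never chosen.
    """
    for alt in alternatives:
        if prefers(alt, default):
            return alt
    return default

print(myopic_default_choice("A", ["B"]))        # B incomparable to A -> "A"
print(myopic_default_choice("A_minus", ["A"]))  # A > A_minus         -> "A"
```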
These aren’t the only approaches an agent can employ, and that’s where the argument fails. It’s wrong to conclude that “non-dominated strategy implies utility maximization”, since we know from section 2 that we can achieve non-domination without completeness—by using a different dynamic choice principle and choice function.
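For contrast, here’s a sketch of one such alternative, using the same hypothetical prospects: an agent that evaluates whole plans through the tree and is willing to execute any plan that no other available plan strictly beats. The toy plan structure below is mine, not the trees from the post, but it illustrates non-domination without completeness:

```python
# Same hypothetical preferences: only A > A_minus; B is incomparable to both.
STRICT_PREFS = {("A", "A_minus")}

def prefers(x, y):
    return (x, y) in STRICT_PREFS

def maximal_plans(plans):
    """Plans whose final prospect no other available plan strictly beats."""
    return [p for p in plans
            if not any(prefers(q[-1], p[-1]) for q in plans)]

# A toy two-step tree, each plan listed by the prospects it passes through:
# keep A, trade A for B, or trade A for B and then B for A_minus.
plans = [("A", "A"), ("A", "B"), ("A", "B", "A_minus")]

print(maximal_plans(plans))
# -> [('A', 'A'), ('A', 'B')]
# The plan ending in the strictly worse prospect A_minus is ruled out by the
# plan that keeps A, so the agent never executes a dominated strategy even
# though its preferences are incomplete (A and B remain incomparable).
```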
That is a fantastic answer, thank you. Do you think there’s any way your post could be wrong? For instance, “[letting] decision trees [be] the main model of an agent’s environment”, as per John Wentworth in a discussion with EJT[1], where he makes a critique similar to your point about their implicit dynamic choice principle?

[1] See the comments section of this post: https://www.lesswrong.com/posts/bzmLC3J8PsknwRZbr/why-not-subagents
Thanks for saying! This is an interesting topic. Regarding the discussion you mention, I think my results might help illustrate Elliott Thornley’s point. John Wentworth wrote:
> That makes me think that the small decision trees implicitly contain a lot of assumptions that various trades have zero probability of happening, which is load-bearing for your counterexamples. In a larger world, with a lot more opportunities to trade between various things, I’d expect that sort of issue to be much less relevant.
My results made no assumptions about the size or complexity of the decision trees, so I don’t think this itself is a reason to doubt my conclusion. More generally, if there exists some Bayesian decision tree that faithfully represents an agent’s decision problem, and the agent uses the appropriate decision principles with respect to that tree, then my results apply. The existence of such a representation is not hindered by the number of choices, the number of options, or the subjective probability distributions involved.
I think my results under unawareness (section 3) are particularly likely to be applicable to complex real-world decision problems. The agent can be entirely wrong about their actual decision tree—e.g., falsely assigning probability zero to events that will occur—and yet appropriate opportunism remains and trammelling is bounded. This is because any suboptimal decision by an agent in these kinds of cases is a product of its epistemic state, not its preferences. Whether the agent’s preferences are complete or not, it will make wrong turns in the same class of situations. The globally-DSM choice function will guarantee that the agent couldn’t have done better given its knowledge and values, even if the agent’s model of the world is wrong.
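On that last point, a toy illustration with entirely hypothetical payoffs and credences (nothing here is from the post): an agent that wrongly assigns probability zero to rain turns down an umbrella whether or not its preferences are complete, so the regret traces to its beliefs rather than to incompleteness.

```python
# Hypothetical payoffs; the point is only that the bad choice comes from the
# belief "rain has probability 0", not from incomplete preferences.
payoff = {("umbrella", "rain"): 1, ("umbrella", "dry"): 0,
          ("no_umbrella", "rain"): -10, ("no_umbrella", "dry"): 2}

def expected_value(act, p_rain):
    return p_rain * payoff[(act, "rain")] + (1 - p_rain) * payoff[(act, "dry")]

believed_p_rain = 0.0  # the agent's mistaken credence
acts = ["umbrella", "no_umbrella"]

# Complete preferences: maximize believed expected value.
eu_choice = max(acts, key=lambda a: expected_value(a, believed_p_rain))

# Incomplete preferences, modelled crudely here as ruling out only those acts
# whose believed value is strictly beaten by some other act's believed value.
undominated = [a for a in acts
               if not any(expected_value(b, believed_p_rain) >
                          expected_value(a, believed_p_rain) for b in acts)]

print(eu_choice)    # -> "no_umbrella"
print(undominated)  # -> ["no_umbrella"]
# It then rains. Both agents made the same wrong turn, and the mistake traces
# to the probability-zero belief, not to whether preferences are complete.
```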