Two Types of Updatelessness

Just a small observation which I’m not sure has been made anywhere else:

It seems like there are two different classes of “updateless reasoning”.

In problems like Agent Simulates Predictor, switching to updateless reasoning is better for you in the very situation you find yourself in. The gains accrue to you. You objectively achieve higher expected value, at the point of decision, by making the decision from the perspective of yourself long ago rather than doing what seems higher EV from the current perspective.
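
To make this concrete: in Agent Simulates Predictor, the agent has more computing power than the predictor, so it can simulate the predictor and treat the prediction as a settled fact; updateful reasoning then two-boxes and loses. Here is a minimal sketch (the perfectly accurate predictor and the Newcomb-style payoffs are simplifying assumptions, not part of any canonical formalization):

```python
# A minimal model of Agent Simulates Predictor.
# Simplifying assumptions: the predictor correctly predicts which
# policy the agent runs, and payoffs are the usual Newcomb numbers.

def predictor(policy):
    """Weaker than the agent, but accurate about the agent's policy."""
    return policy  # predicts "one-box" or "two-box"

def payoff(action, prediction):
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    transparent_box = 1_000
    return opaque_box if action == "one-box" else opaque_box + transparent_box

# Updateful reasoning: the agent simulates the predictor and treats
# the prediction as a settled fact. For any fixed prediction,
# two-boxing dominates, so the updateful agent two-boxes.
print(payoff("two-box", predictor("two-box")))      # 1000

# Updateless reasoning: evaluate whole policies, accounting for the
# predictor's response to the policy itself.
best = max(["one-box", "two-box"], key=lambda p: payoff(p, predictor(p)))
print(best, payoff(best, predictor(best)))          # one-box 1000000
```

Both numbers land on the same agent in the same situation, which is why the gain here is objective: no appeal to other possible selves is needed.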

In problems like counterfactual mugging, the gains do not accrue to the agent at the point of making the decision. The increase in expected value goes to other possible selves, which the decision-point self does not even believe in any more. The claim of higher EV is quite subjective; it depends entirely on one’s prior.
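
Concretely, with the usual illustrative numbers (a fair coin; on tails Omega asks you for $100, and on heads it pays you $10,000 iff it predicts you would have paid on tails), the prior EV and the decision-point EV come apart:

```python
# Counterfactual mugging with illustrative numbers.
P_HEADS = 0.5
COST, REWARD = 100, 10_000

# Prior perspective (before the coin flip): the policy "pay" beats
# the policy "refuse".
ev_pay_prior = P_HEADS * REWARD + (1 - P_HEADS) * (-COST)  # 4950.0
ev_refuse_prior = 0.0

# Decision-point perspective (you have already seen tails): paying
# just loses $100; the $10,000 belongs to a heads-self you no longer
# believe in.
ev_pay_now = -COST  # -100

print(ev_pay_prior, ev_refuse_prior, ev_pay_now)
```

And the prior is doing all the work: if P_HEADS drops below COST / (COST + REWARD), about 0.0099 here, even the prior recommends refusing.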

For lack of better terms, I’ll call the first type all-upside updatelessness and the second mixed-upside updatelessness.

It is quite possible to construct decision theories which get all-upside updateless reasoning without getting mixed-upside. Asymptotic decision theory was one such proposal.

On the other hand, it seems unlikely that any natural proposal would get mixed-upside without getting the all-upside cases. Policy selection, for example, automatically gets both types (to the limited extent that it enables updateless reasoning).
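
To see how a single mechanism scoops up both types, here is a toy rendering of policy selection (a cartoon of the idea, not the actual proposal): enumerate mappings from observations to actions and keep the one with the highest expected value under the prior. On the counterfactual mugging numbers above it pays, and the same argmax over whole policies is exactly the move that one-boxed in the Agent Simulates Predictor sketch.

```python
from itertools import product

# Toy policy selection on counterfactual mugging: choose the whole
# observation-to-action mapping with the best prior expected value.
OBSERVATIONS = ["heads", "tails"]
ACTIONS = ["pay", "refuse"]

def prior_ev(policy):
    # Omega pays on heads iff the policy would pay on tails.
    heads_payout = 10_000 if policy["tails"] == "pay" else 0
    tails_payout = -100 if policy["tails"] == "pay" else 0
    return 0.5 * heads_payout + 0.5 * tails_payout

policies = [dict(zip(OBSERVATIONS, acts))
            for acts in product(ACTIONS, repeat=len(OBSERVATIONS))]
best = max(policies, key=prior_ev)
print(best, prior_ev(best))  # pays on tails; prior EV 4950.0
```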

Nonetheless, I find it plausible that one wants two different mechanisms for the two different kinds. It seems to me that all-upside cases can be handled in a more objective way, with good overall guarantees. Mixed-upside cases, on the other hand, require more messiness and compromise, as in the policy selection proposal. So, it could be beneficial to combine a mechanism which handles all-upside cases perfectly with a mechanism that provides some weaker guarantee for the mixed-upside cases.