Just to confirm, this means that the thing I put in quotes would probably end up being dynamically inconsistent? In order to avoid that, I need to put in an additional step of also ruling out plans that would be dominated from some constant prior perspective? (It’s a good point that these won’t be dominated from my current perspective.)
That’s right.
(Not sure you’re claiming otherwise, but FWIW, I think this is fine — it’s true that there’s some computational cost to this step, but in this context we’re talking about the normative standard rather than what’s most pragmatic for bounded agents. And once we start talking about pragmatic challenges for bounded agents, I’d be pretty dubious that, e.g., “pick a very coarse-grained ‘best guess’ prior and very coarse-grained way of approximating Bayesian updating, and try to optimize given that” would be best according to the kinds of normative standards that favor indeterminate beliefs.)