WRT #9, a Bayesian might want to believe X because they are in a weird decision-theory problem where beliefs make things come true. This seems relatively common for humans, whose beliefs leak into outcomes through their reactions unless they can hide those reactions well.
The issue of wanting X to happen does seem rather subtle, especially since there isn't a clean division between things you want to know about and things you might want to influence. The resolution of this paradox in classical decision theory is that the agent should already know its own plans, so its beliefs already perfectly reflect whatever influence it has on X. Of course, this rests on an assumption of logical omniscience; bounded agents with logical uncertainty can't reason like that.
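To make the fixed-point flavor of this concrete, here is a minimal sketch in Python. Everything in it is made up for illustration (the names `world_response` and `fixed_point_credence`, and the particular response curve): the world makes X more likely the more the agent expects it, so a calibrated agent that already knows its own plans must hold a credence that is a fixed point of that feedback, whereas an agent that merely *wants* X would prefer to report credence 1.

```python
# Toy sketch (all names and numbers hypothetical) of a self-fulfilling belief:
# the world makes X true with a probability that depends on the agent's
# credence in X, so "wanting to believe X" can actually pay off.

def world_response(credence: float) -> float:
    """Hypothetical world: the more the agent expects X, the likelier X is
    (e.g., visibly confident agents get better outcomes)."""
    return 0.2 + 0.6 * credence  # P(X | agent's credence)

def fixed_point_credence(tol: float = 1e-9) -> float:
    """A logically omniscient agent knows its own credence feeds back into
    the world, so a calibrated belief must satisfy
    credence == world_response(credence). Iteration converges here because
    the response curve is a contraction (slope 0.6 < 1)."""
    c = 0.5
    while True:
        c_next = world_response(c)
        if abs(c_next - c) < tol:
            return c_next
        c = c_next

if __name__ == "__main__":
    c = fixed_point_credence()
    print(f"calibrated (fixed-point) credence: {c:.4f}")  # 0.5 in this toy
    # An agent that simply wants X would rather report credence 1.0, since
    # world_response(1.0) = 0.8 > world_response(0.5) = 0.5. That tension
    # between calibration and wishful belief is the paradox discussed above.
```

The classical resolution dissolves the tension by fiat: the omniscient agent's credence is already the fixed point, so there is no remaining wedge for wishful thinking to exploit. A bounded agent, not knowing its own plans in advance, has no such guarantee.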