Note that in #1, if you want to avoid the “lackluster doing” outcome, you have to be genuinely willing not to do the thing, i.e. to take the pessimism effectively into account during the group discussion. That seems to be a quite distinct skill, and not an obvious one.
In #9 it’s kind of weird that a Bayesian wants to increase the probability of a proposition. Someone who takes conservation of expected evidence to heart would know that an inflated number would be counterproductive hubris. I guess it could mean “I want to make X happen” vs. “I want to believe X will happen”. I see how the reasoning works on the belief side, but on the affecting-the-world side I’m not sure the logic even applies.
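Conservation of expected evidence is the point doing the work here: your prior must already equal the expectation of your posterior over possible evidence, so there is no coherent way to anticipate revising your probability upward. A minimal numerical sketch, with made-up prior and likelihoods:

```python
# Conservation of expected evidence: the prior P(X) equals the
# expectation of the posterior P(X|E) over the possible evidence.
# All numbers below are illustrative assumptions.
p_x = 0.3              # prior P(X)
p_e_given_x = 0.8      # likelihood P(E|X)
p_e_given_not_x = 0.2  # likelihood P(E|not X)

# Marginal probability of observing the evidence E.
p_e = p_e_given_x * p_x + p_e_given_not_x * (1 - p_x)

# Posterior in each branch, by Bayes' rule.
post_if_e = p_e_given_x * p_x / p_e
post_if_not_e = (1 - p_e_given_x) * p_x / (1 - p_e)

# Averaging the posteriors, weighted by how likely each branch is,
# recovers the prior exactly.
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
assert abs(expected_posterior - p_x) < 1e-12
```

So a Bayesian who expects their probability of X to go up on reflection is, by their own lights, already miscalibrated, which is why “I want to increase P(X)” only makes sense read as wanting to make X happen.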
WRT #9, a Bayesian might want to believe X because they are in a weird decision-theory problem where beliefs make things come true. This seems fairly common for humans, unless they can hide their reactions well.
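The “beliefs make things come true” case can be made concrete as a fixed-point problem: if the chance of success depends on the agent's confidence, the only calibrated belief is one that equals the success probability it induces. A toy sketch, with an assumed (purely illustrative) link between confidence and success:

```python
# Hypothetical self-fulfilling belief: the probability that an attempt
# succeeds increases with the agent's confidence that it will succeed.
def success_prob(confidence):
    # Assumed link function, purely for illustration.
    return 0.2 + 0.6 * confidence

# A calibrated belief must satisfy belief == success_prob(belief).
# Iterating the map converges to that fixed point (contraction, slope 0.6).
belief = 0.1
for _ in range(100):
    belief = success_prob(belief)

# Fixed point of b = 0.2 + 0.6*b  =>  b = 0.5
assert abs(belief - 0.5) < 1e-9
```

In a setup like this, a more optimistic link function really does license a higher calibrated belief, which is the sense in which wanting to believe X can be decision-theoretically coherent rather than mere hubris.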
The issue of wanting X to happen does seem rather subtle, especially since there isn’t a clean division between things you want to know about and things you might want to influence. The solution of this paradox in classical decision theory is that the agent should already know its own plans, so its beliefs already perfectly reflect any influence which it has on X. Of course, this comes from an assumption of logical omniscience. Bounded agents with logical uncertainty can’t reason like that.
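The classical resolution can be sketched in a few lines: the logically omniscient agent first fixes its plan, and its belief in X is then simply the probability conditional on that plan, so there is nothing left for it to “push” on. The conditional probabilities below are made-up illustrative values:

```python
# Classical decision theory: the agent already knows its own plan,
# so its belief about X conditions on that plan from the start.
# The model P(X | action) is an illustrative assumption.
p_x_given_action = {"try_hard": 0.7, "slack_off": 0.3}

# Wanting X to happen is expressed entirely through the choice of plan...
plan = max(p_x_given_action, key=p_x_given_action.get)

# ...and the belief in X then just reads off the model for that plan.
belief_in_x = p_x_given_action[plan]

assert plan == "try_hard"
assert belief_in_x == 0.7
```

A bounded agent with logical uncertainty breaks this picture precisely because it may not yet know which plan it will choose, so its current belief cannot fully reflect its own influence on X.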