I would have answered 1B:1 (and I look forward to the second post proving me wrong), but I think a rational agent should never believe in the Boltzmann brain scenario regardless.
Not because it is an unreasonable hypothesis, but because it undermines the agent’s ability to estimate prior probabilities (it cannot trust even a fixed portion of its memories), and it also makes optimizing outcomes a futile undertaking.
Therefore, I’d generally say that an agent has to assume an objective, causal reality as a precondition of using decision theory at all.