Here is an attempt at an updateless answer, since the problem is too confusing for me from an individual perspective. I'm not sure to what extent this contradicts my earlier answer.
Assume multiverse branches A and B with equal measure/prior probability, where A contains 99 times as many instances of the agent as B. If the agent weights the consequences of each instance's actions equally, then each branch's outcome carries weight proportional to its measure times its instance count (0.5 × 99 vs. 0.5 × 1), so in most cases the instances will behave like individualist single-world agents who believe "A" with 0.99 confidence. Most human-level problems are probably of that type. But there may be problems where the majority answer in A and the majority answer in B matter equally, or where some other weighting makes the answer in A count for something other than 99 times the answer in B. In those cases the agent won't behave like an agent believing "A" with 0.99 confidence.
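To make the two weighting schemes concrete, here is a minimal sketch. The numbers (equal measure 0.5, 99 vs. 1 instances) come from the example above; the two rules, per-instance and per-branch weighting, are my labels for the scenarios described, not part of any formal decision theory:

```python
# Illustrative sketch: decision weight each branch gets under
# two weighting rules. All numbers are the example's assumptions.

measure = {"A": 0.5, "B": 0.5}   # prior measure of each branch
instances = {"A": 99, "B": 1}    # agent instances in each branch

def decision_weights(per_instance: bool) -> dict:
    """Weight each branch's outcome either per agent-instance
    (measure x instance count) or per branch (measure only),
    then normalize so the weights sum to 1."""
    raw = {
        b: measure[b] * (instances[b] if per_instance else 1)
        for b in measure
    }
    total = sum(raw.values())
    return {b: w / total for b, w in raw.items()}

print(decision_weights(per_instance=True))   # {'A': 0.99, 'B': 0.01}
print(decision_weights(per_instance=False))  # {'A': 0.5, 'B': 0.5}
```

Per-instance weighting reproduces the 0.99/0.01 split, so the agent's behavior is indistinguishable from believing "A" with 0.99 confidence; per-branch weighting leaves both branches pulling equally.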