Do you mean what UDT would do in the example I gave? A UDT agent would have a preference about how it wants the multiverse as a whole to turn out. Does it prefer:
A. In approximately half of possible universes/branches, 100 copies of itself gain $100. In the other half, 1 copy of itself loses $100. Or,
B. Nobody gains or loses any money.
Betting implies A, and not betting implies B; so if it prefers A to B, it chooses to bet, and otherwise it doesn't. (For simplicity, this analysis ignores more complex strategies such as only betting for some fraction of possible values of R.)
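To make the comparison concrete, here is a minimal sketch under one particular assumed utility function: total dollars summed over all copies, with the two branches weighted equally. UDT itself does not fix this aggregation; it is purely an illustrative assumption.

```python
# Hypothetical sketch: comparing the two policies under an assumed utility,
# namely "total dollars across all copies, equal weight on both branches".
# UDT does not dictate this aggregation; it only says the agent picks the
# policy whose overall multiverse outcome it prefers.

def multiverse_value(bet: bool) -> float:
    """Aggregate dollars over the whole multiverse for a given policy."""
    if not bet:
        return 0.0  # Option B: nobody gains or loses any money.
    # Option A: in ~half of branches, 100 copies each gain $100;
    # in the other half, 1 copy loses $100.
    return 0.5 * (100 * 100) + 0.5 * (-100)

print(multiverse_value(bet=True))   # 4950.0
print(multiverse_value(bet=False))  # 0.0
# Under this assumed additive utility, the agent prefers A and bets.
```

Under a different aggregation (say, one that weights the single copy's loss much more heavily), the comparison could come out the other way, which is exactly why the answer turns on the agent's preferences over the multiverse as a whole.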
I agree that this example proves that the naive approach doesn’t work in general. Thank you for providing it.