Observers who would only exist in logically impossible worlds can’t make bets, so the “collective sucker” arguments don’t really work.
I’m confused about whether, and how, this property of “logical possibility” leaks through to affect the rationality of our decisions. After all, people in non-actual possible worlds don’t actually make bets either. So if we don’t care about bets that can’t possibly be made, why should we care about bets that don’t actually get made?
It seems to me that I can rationally take favorable bets on far-out digits of pi, even though it’s conceivable that doing so makes all logically possible versions of me worse off. If I bet, at 1:100 odds, that some far-out digit is 7, then I should even expect that I will probably make all possible versions of myself worse off; and yet it’s still the rational thing to do. So how is it different when you maximize EU through anthropic reasoning?
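The arithmetic here can be made explicit. A minimal sketch, with assumed stakes (a 1-unit stake returning 100 units on a win; nothing in the comment fixes these numbers):

```python
# Bet 1 unit that a far-out digit of pi is 7; win 100 units if right,
# lose the stake if wrong. Given our logical ignorance, each digit 0-9
# is treated as equally likely.
p_win = 0.1
payout, stake = 100, 1

ev = p_win * payout - (1 - p_win) * stake
print(ev)           # ~9.1: positive expected value, so the bet is favorable
print(1 - p_win)    # 0.9: yet with 90% probability you (and every possible
                    # version of you, since the digit is logically fixed) lose
```

The point the example illustrates: the digit is already determined, so all logically possible versions of you share the same outcome, and that outcome is probably a loss; the bet is still rational ex ante.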
You should probably cite Bostrom for the Presumptuous Philosopher thought experiment.
However, there is a model in which anthropics behaves differently with respect to logical and empirical uncertainty: the multiverse theory. Here, every logically possible world is actualized, so there really are more copies of you in regions where the empirical variables take values that produce more copies.
It seems to me that I can rationally take favorable bets on far-out digits of pi, even though it’s conceivable that it makes all logically possible versions of me worse off
Sure, but that’s not quite the same structure of reasoning as in the example I gave. Suppose that someone asks you to bet on whether RH (the Riemann hypothesis) is true, and your mathematical intuition says 50/50. Then they present the argument that RH ==> many, many more observers. Do your odds now change?
Reasoning from one empirical thing to another via a logical question is different from directly speculating about the logical question.
I must admit that I am also somewhat confused.