In my paper, I discuss how Full Non-Indexical Conditioning seems to break down if the universe is so large that someone with memories identical to yours has a non-negligible chance of existing elsewhere in the universe. Note that this requires a VERY large universe—the size of universe we can actually observe in telescopes isn’t enough.
I go on to argue that although I don’t know how to resolve this issue, I think it’s likely that it has no relevance when addressing non-cosmological problems such as Sleeping Beauty or the Doomsday Argument. Sleeping Beauty in particular is only mildly fantastical (memory erasure) and is otherwise a mundane issue of local behaviour in our part of the universe. I don’t see why its solution should depend on whether the universe is large, very large, very VERY large, or infinite. I expect that even if Full Non-Indexical Conditioning needs to be modified somehow to cope with really large universes, the modification will not change the result for Sleeping Beauty. It’s sort of how physicists in 1850 probably realized there were a few puzzles regarding light and Newtonian physics, but nevertheless thought, correctly, that the resolution of those puzzles wouldn’t change the answers to questions of when bridges will collapse.
I think a variation of my approach to resolving the betting argument for SB can also help deal with the very large universe problem. I’ve taken a look at the following setup:
There are N Experimenters scattered throughout the universe, where N is very, very large. Each Experimenter tries to determine which of two hypotheses, A or B, about the universe is correct by running some experiment and collecting some data. Let d be the data collected, and let y be the remaining information (experiences, memories) that could distinguish this Experimenter from others.
It is possible to choose N so large that the prior probability approaches one that there will be some Experimenter with that particular d and y, regardless of whether A or B is true. This means that the Experimenter’s posterior probability for A versus B will update only slightly from its prior probability.
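The near-cancellation of the update can be seen in a toy calculation. This is a hedged sketch: the particular match probabilities p_A and p_B, and the assumption that Experimenters match independently, are illustrative choices of mine, not part of the original setup.

```python
# Toy model: under hypothesis H, each of the N Experimenters independently
# ends up with exactly this data-plus-memories (d, y) with tiny probability
# p_H. FNC conditions on the proposition "some Experimenter with (d, y)
# exists", whose probability is 1 - (1 - p_H)^N.
p_A, p_B = 2e-9, 1e-9   # illustrative per-Experimenter match probabilities
prior_A = 0.5           # equal prior on A and B

def posterior_A(N):
    """Posterior for A after conditioning on the existence of a (d, y) match."""
    like_A = 1 - (1 - p_A) ** N   # P(at least one match | A)
    like_B = 1 - (1 - p_B) ** N   # P(at least one match | B)
    return like_A * prior_A / (like_A * prior_A + like_B * (1 - prior_A))

for N in (1, 10**6, 10**12):
    print(N, posterior_A(N))
```

For small N the likelihood ratio is about p_A / p_B = 2, so the posterior for A is near 2/3; once N is large enough that a match is almost certain under either hypothesis, both likelihoods approach 1 and the posterior falls back to the prior of 1/2.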
And yet if the Experimenter has to make a choice based on whether A or B is true, and we weight the payoffs according to how many Experimenters there are with the same y and d (as done in my analysis for SB), then the maximum-expected-utility answer does not depend on N: from the standpoint of decision-making, we can ignore the possibility of all those other Experimenters and just assume N=1.
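A minimal sketch of why N drops out, using illustrative match probabilities and payoffs of my own choosing: if payoffs are weighted by the expected number of Experimenters with the same (d, y), that number is N times a per-Experimenter probability, so N multiplies both sides of the comparison and cancels.

```python
# Hedged sketch of the decision-weighting argument. Assume (my toy numbers)
# that under hypothesis H each of the N Experimenters matches the given
# (d, y) with probability p_H, so the expected number of matches is N * p_H.
p_A, p_B = 2e-9, 1e-9            # illustrative per-Experimenter match probabilities
prior_A = prior_B = 0.5          # equal priors on A and B
payoff_A = payoff_B = 1.0        # payoff for acting correctly under each hypothesis

def best_action(N):
    """Hypothesis to act on, weighting each payoff by the expected match count."""
    eu_A = prior_A * (N * p_A) * payoff_A
    eu_B = prior_B * (N * p_B) * payoff_B
    return "A" if eu_A > eu_B else "B"

# The recommendation is the same for every N -- in particular, same as N = 1:
print({best_action(N) for N in (1, 10**6, 10**12)})   # {'A'}
```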
Interesting. I guess for this to work, one has to have what one might call a non-indexical morality—one that might favour people very, very much like you over others, but that doesn’t favour YOU (whatever that means) over other nearly-identical people. (I’m going for “nearly-identical” over “identical”, since I’m not sure what it means for there to be several people who are identical.) It seems odd that morality should have anything to do with probability, but maybe it does...
Fair enough. I just thought it was a kind of weird thing for a theory to be sensitive to. I guess the theory is self-consistent although it’s not clear to me how well it matches with the intuitive concept of “probability”.