I remember you linked me to Radford Neal’s paper (pdf) on Full Non-indexical Conditioning. I think FNC is a much nicer way to think about problems like these than SSA and SIA, but I guess you disagree?
To save others from having to wade through the paper, I’ll try to explain relatively briefly what FNC means:
First, let’s consider a much simpler instance of the Doomsday Argument: At the beginning of time, God tosses a coin. If heads then there will only ever be one person (call them “M”), who is created, matures and dies on Monday, and then the world ends. If tails then there will be two people, one (“M”) who lives and dies on Monday and another (“T”) on Tuesday. As this is a Doomsday Argument, we don’t require that T is a copy of M.
M learns that it’s Monday but is given no (other) empirical clues about the coin. M says to herself “Well, if the coin is heads then I was certain to find myself here on Monday, but if it’s tails then there was a 1⁄2 chance that I’d find myself experiencing a Tuesday. Applying Bayes’ theorem, I deduce that there’s a 2⁄3 chance that the coin is heads, and that the world is going to end before tomorrow.”
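M’s naive update can be checked with a couple of lines of arithmetic. This is just a sketch of the self-locating reasoning above (the variable names are mine, not Neal’s):

```python
# M's naive indexical update: treat "my experience is a Monday
# experience" as evidence, with likelihood 1 under heads (the only
# observer is on Monday) and 1/2 under tails (a random observer
# among {M, T} is on Monday half the time).
p_heads = 0.5
p_monday_given_heads = 1.0
p_monday_given_tails = 0.5

posterior_heads = (p_monday_given_heads * p_heads) / (
    p_monday_given_heads * p_heads
    + p_monday_given_tails * (1 - p_heads)
)
print(posterior_heads)  # 2/3: the Doomsday-style shift toward heads
```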
Now FNC makes two observations:
The event on which we were updating, “it is Monday today”, is indexical. However, an “indexical event” isn’t strictly speaking an event. (Because an event picks out a set of possible worlds, whereas an indexical event picks out a set of possible “centered worlds”.) Since it isn’t an event, we can’t update on it.
(But apart from that) the best way to do an update is to update on everything we know.
M takes these points to heart. Rather than updating on “it is Monday” she instead updates on “there once was a person who experienced this [complete catalogue of M’s mental state] and that person lived on Monday.”
If we ignore the (at best) remote possibility that T has exactly the same experiences as M (prior to learning which day it is) then the event above is independent of the coin toss. Therefore M should calculate a posterior probability of 1⁄2 that the coin is heads.
On discovering that it’s Monday, M gains no evidence that the end of the world is nigh. Notice that we’ve reached this conclusion independently of decision theory.
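The FNC calculation is the same Bayes computation with the indexical evidence replaced by the non-indexical event. A minimal sketch (again my own illustration, assuming T’s experiences never exactly duplicate M’s):

```python
# FNC update: the evidence is "someone with M's exact mental state
# exists and lives on Monday". Ignoring the remote chance that T
# duplicates M's experiences, this event occurs in both the heads-
# and tails-worlds, so its likelihood is 1 under each hypothesis.
p_heads = 0.5
p_evidence_given_heads = 1.0  # M exists on Monday if heads
p_evidence_given_tails = 1.0  # M also exists on Monday if tails

posterior_heads = (p_evidence_given_heads * p_heads) / (
    p_evidence_given_heads * p_heads
    + p_evidence_given_tails * (1 - p_heads)
)
print(posterior_heads)  # 0.5: no update, no Doomsday shift
```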
If M is ‘altruistic’ towards T, valuing him as much as she values herself, then she should be prepared to part with one cube of chocolate in exchange for a guarantee that he’ll get two if he exists. If M is ‘selfish’ then the exchange rate changes from 1:2 to 1:infinity. These exchange rates are not probabilities. It would be very wrong to say something like “the probability that M gives to T’s existence only makes sense when we specify M’s utility function, and in particular it changes from 1⁄2 to 0 if M switches from ‘altruistic’ to ‘selfish’”.
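To see why the exchange rates come apart from the (fixed) probability of 1⁄2, here is a toy expected-value check of the chocolate trade, with made-up utility functions of my own:

```python
# M pays `cost` cubes now; T receives `payout` cubes if the coin was
# tails (probability 1/2 on M's FNC posterior).
p_tails = 0.5

def altruistic_value(cost, payout):
    # M weighs T's chocolate equally with her own.
    return -cost + p_tails * payout

def selfish_value(cost, payout):
    # M places no value on T's chocolate, so any cost is a pure loss.
    return -cost

print(altruistic_value(1, 2))  # 0.0: altruistic M breaks even at 1:2
print(selfish_value(1, 2))     # -1: selfish M rejects every finite rate
```

The probability of tails is 1⁄2 in both calculations; only the utility assigned to T’s chocolate changes.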