> Nonetheless you should (I intuitively feel) still conclude the coin was likely tails.
I think your intuitions lead you astray at exactly this point.
Suppose that the 1000 of you are randomly ‘tagged’ with distinct id numbers from the set {1,...,1000}, and that a clone learns its id number upon waking. Suppose you wake in a green room and see id number 707.
If all the clones remember to apply anthropic reasoning (assuming for argument’s sake that my current line of reasoning is ‘anthropic’) then you can easily work out that the probability of the observed event “number 707 is an anthropic reasoner in a green room” is 1/1000 if the coin was heads, or 999/1000 if it was tails.
However, if 998 of the clones have their ‘anthropic reasoning’ capacity removed, then both probabilities are 1/1000, and you should conclude that heads and tails are equally likely.
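(To make the update explicit, here is a minimal sketch of the Bayes arithmetic, assuming a fair coin; the helper function and its name are just for illustration.)

```python
from fractions import Fraction

def posterior_tails(lik_heads, lik_tails, prior_tails=Fraction(1, 2)):
    """P(tails | evidence) by Bayes' rule, given the two likelihoods."""
    prior_heads = 1 - prior_tails
    return lik_tails * prior_tails / (lik_tails * prior_tails + lik_heads * prior_heads)

# All 1000 clones apply anthropic reasoning: likelihoods 1/1000 vs 999/1000.
print(posterior_tails(Fraction(1, 1000), Fraction(999, 1000)))  # 999/1000, i.e. tails very likely

# 998 clones have AR removed as above: both likelihoods are 1/1000.
print(posterior_tails(Fraction(1, 1000), Fraction(1, 1000)))    # 1/2, i.e. no update
```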
Are you sure? In the earlier model where memory erasure is random, remembering AR will be independent of the room placements and won’t tell you anything extra about them.
(Note: I got the numbers slightly wrong as first written; the 1001s should have been 1000s, etc.)
Yes: If the coin was heads then the probability of event “clone #707 is in a green room” is 1/1000. And since, in this case, the clone in the green room is sure to be an anthropic reasoner, the probability of “clone #707 is an anthropic reasoner in a green room” is still 1/1000.
On the other hand, if the coin was tails then the probability of “clone #707 is in a green room” is 999/1000. However, clone #707 also knows that “clone #707 is an AR”, and P(#707 is AR | coin was tails and #707 is in a green room) is only 1/999.
Therefore, P(#707 is an AR in a green room | coin was tails) is (999/1000) * (1/999) = 1/1000.
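(For what it’s worth, here is a quick Monte Carlo sketch of the setup I have in mind, where the AR removal spares the green-room side: under heads the lone green-room clone keeps AR, and under tails exactly one of the 999 green-room clones does. This is my own reading of the setup, only meant to check the arithmetic above.)

```python
import random

def trial():
    """One run of the setup sketched above: coin flip, room assignment, AR removal."""
    tails = random.random() < 0.5
    clones = list(range(1, 1001))
    random.shuffle(clones)
    if tails:
        green = set(clones[:999])               # tails: 999 green rooms, 1 red
        ar_green = random.choice(clones[:999])  # 998 green-room clones lose AR; one keeps it
    else:
        green = {clones[0]}                     # heads: 1 green room, 999 red
        ar_green = clones[0]                    # the lone green-room clone keeps AR
    return tails, (707 in green and 707 == ar_green)

def estimate(n=200_000):
    runs, hits = {False: 0, True: 0}, {False: 0, True: 0}
    for _ in range(n):
        tails, hit = trial()
        runs[tails] += 1
        hits[tails] += hit
    for tails in (False, True):
        label = "tails" if tails else "heads"
        print(f"P(#707 is an AR in a green room | {label}) ~= {hits[tails] / runs[tails]:.5f} (exact: 1/1000)")

if __name__ == "__main__":
    estimate()
```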
> If the coin was heads then the probability of event “clone #707 is in a green room” is 1/1000. And since, in this case, the clone in the green room is sure to be an anthropic reasoner, the probability of “clone #707 is an anthropic reasoner in a green room” is still 1/1000.
But you know that you are AR in the exact same way that you know that you are in a green room. If you’re taking P(BeingInGreenRoom|CoinIsHead)=1/1000, then you must equally take P(AR)=P(AR|CoinIsHead)=P(AR|BeingInGreenRoom)=1/1000.
> and P(#707 is AR | coin was tails and #707 is in a green room) is only 1/999.
Why shouldn’t it be 1/1000? The lucky clone who gets to retain AR is picked at random among the entire thousand, not just the ones in the more common type of room.
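(Spelling out the arithmetic under that model, i.e. the AR-retaining clone chosen uniformly from all 1000 clones, independently of the room assignment; this is a sketch of the model described in this comment, not something established earlier in the thread.)

```python
from fractions import Fraction

# One clone keeps AR, chosen uniformly from all 1000, independently of rooms.
p_green_heads = Fraction(1, 1000)    # P(#707 is in a green room | heads)
p_green_tails = Fraction(999, 1000)  # P(#707 is in a green room | tails)
p_keep_ar = Fraction(1, 1000)        # P(#707 keeps AR), independent of the rooms

lik_heads = p_green_heads * p_keep_ar  # P(#707 is an AR in a green room | heads)
lik_tails = p_green_tails * p_keep_ar  # P(#707 is an AR in a green room | tails)

# The 1/1000 AR factor cancels, so the likelihood ratio stays 999:1 and the
# fair-coin posterior for tails is still 999/1000.
print(lik_tails / (lik_tails + lik_heads))  # 999/1000
```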
Doh! Looks like I was reasoning about something I made up myself rather than Jordan’s comment.