I’m referring to an example from here: https://users.cs.duke.edu/~conitzer/devastatingPHILSTUD.pdf where you do wake up both days.
Your argument seemed similar, but I may be misunderstanding:
“Treating these and other differences as random, the probability of Beauty having at some time the exact memories and experiences she has after being woken this time is twice as great if the coin lands Tails than if the coin lands Heads, since with Tails there are two chances for these experiences to occur rather than only one.”
It sounds like you are conditioning on “such experiences occur at least once.” That is, if Beauty wakes up and flips a coin, getting heads, and that’s the only experience she has so far, she conditions on “at least one heads.” This doesn’t seem generally correct, as the linked example shows. Doesn’t it also mean that, even before the coin flip, she would know exactly how she was going to update her probability afterward, regardless of the result?
Perhaps the issue here is that if you wake up and flip heads, that isn’t the same thing as if, on Sunday, you asked “will I flip at least one heads?” and got an affirmative answer. The latter is relevant to the number of wakings. The former is not.
I don’t understand the reasoning for using irrelevant information.
If you are saying that there is twice the probability of experiencing y “at least once” on tails, doesn’t that fail by the same argument Conitzer gave against halfers? His example was that you wake up both days and flip a coin each time. If you flip heads, what is the probability that both flips are the same? You are twice as likely to experience heads at least once if the coin tosses are different: the probability of at least one heads is 1 if the flips differ, but only 1/2 if they match. But that is irrelevant. The probability of “both the same” is still 1/2.
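A quick Monte Carlo sketch of Conitzer’s coin-flip variant (the setup and variable names are my own reading of it): Beauty wakes both days and flips a fair coin at each waking. Conditioning per-awakening on “I just flipped heads” leaves P(both flips the same) at 1/2, whereas conditioning on “heads comes up at least once” would shift it to 1/3 — which is exactly the distinction between the two kinds of evidence at issue here.

```python
import random

random.seed(0)
N = 200_000

# Per-awakening conditioning: among awakenings where Beauty flips heads,
# how often are the Monday and Tuesday flips the same?
same_given_heads_now = [0, 0]   # [count same, total observations]
# "At least once" conditioning: among trials with at least one heads,
# how often are the two flips the same?
same_given_heads_ever = [0, 0]  # [count same, total trials]

for _ in range(N):
    monday = random.random() < 0.5    # True = heads
    tuesday = random.random() < 0.5
    same = monday == tuesday
    # Each heads-waking is a separate observation opportunity.
    for flip in (monday, tuesday):
        if flip:
            same_given_heads_now[1] += 1
            same_given_heads_now[0] += same
    if monday or tuesday:
        same_given_heads_ever[1] += 1
        same_given_heads_ever[0] += same

print(same_given_heads_now[0] / same_given_heads_now[1])    # ≈ 1/2
print(same_given_heads_ever[0] / same_given_heads_ever[1])  # ≈ 1/3
```

The per-awakening estimate stays at 1/2 because the (H,H) outcome, while only half as likely to occur as “the flips differ,” produces two heads-awakenings when it does, and the two effects cancel exactly.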
On the other hand, in reality there might be some relevant information (noticeable aging, hunger, etc.), but the problem is meant to exclude that.