Very clear argument and many good points. Appreciate the effort.
Regarding your position on thirders vs. halfers: I think it is a completely reasonable position, and I agree with the analysis about when halfers are correct and when thirders are correct. However, to me it seems to treat Sleeping Beauty more as a decision-making problem than a probability problem. Maybe one's credence is not defined without relating it to consequences, but that seems counterintuitive to me. Naturally one should have a belief about the situation, and her decisions should depend on that belief, her objective (how much Beauty cares about the other copies), and the payoff structure (does the reward depend only on her own answer, on all correct answers, on the accuracy rate, etc.). If that's the case, there should exist a unique correct answer to the problem.
About how Beauty should estimate R and treat the samples: I would say that's the best position for a thirder to take. In fact it's the same position I would take too. If I may reword it slightly, see if you agree with this version: the 8 rooms are an unbiased sample for Beauty; that is too obvious to argue otherwise. Her own room is always red, so the 9 rooms are obviously a biased sample for her. However, from an (imaginary) selector's perspective, if he finds the same 9 rooms they are an unbiased sample. Thirders think she should answer from the selector's perspective (I think the most likely reason being that repeated memory wipes make her own perspective somewhat "compromised"), and therefore she would estimate R to be 27. Is this a version you would agree with?
In this version I highlighted the disagreement between the selector and Beauty: the disagreement is not over some numerical value, but over whether a sample is biased. In my four posts all I'm trying to do is argue for the validity and importance of perspective disagreement. If we recognize the existence of this disagreement and let each agent answer from her own perspective, we get a system of reasoning different from both SIA and SSA. It provides an argument for double halving, gives a framework in which frequentists and Bayesians agree with each other, rejects the Doomsday Argument, disagrees with the Presumptuous Philosopher, and rejects the Simulation Argument. I genuinely think this is the explanation of the Sleeping Beauty problem as well as of many related problems in anthropic reasoning. Sadly, only the part arguing against thirding has gotten much attention.
Anyway, I digress. Bottom line: though I do not think it is the best position, I feel your argument is reasonable and well thought out, and I can understand it if people want to take it as their position.
First, I want to say that I do not have a mathematics or philosophy degree; I come from an engineering background and consider myself a hobbyist rationalist. English is not my first language, so please forgive my grammar mistakes.
The reason I've come to LW is that I believe I have something of value to contribute to the discussion of the Sleeping Beauty Problem. I tried to get some feedback by posting on Reddit, but, maybe due to its length, I got few responses. I found LW through Google, and the discussion here is much more in-depth and rigorous, so I'm hoping to get some critiques of my idea.
My main argument is that in the Sleeping Beauty problem, agents who are free to communicate, and who thus have identical information, can still rightfully assign different credences to the same proposition. This disagreement is caused purely by the difference in their perspectives. Because of this perspective disagreement, SIA and SSA are both wrong: they answer the question from an outside "selector" perspective, which differs from Beauty's own. I conclude that the correct answer is double halving.
Because I'm new and cannot start a new discussion thread, I'm posting the first part of my argument here to see if anyone is interested. My complete argument can be found at www.sleepingbeautyproblem.com
Consider the following experiment:
Duplicating Beauty (DB)
Beauty falls asleep as usual. The experimenter tosses a fair coin before she wakes up. If the coin lands on T, a perfect copy of Beauty is produced, precise enough that she cannot tell whether she is the old copy or the new one. If the coin lands on H, no copy is made. The beauty (or beauties) will then be randomly put into two identical rooms. At this point another person, let's call him the Selector, randomly chooses one of the two rooms and enters. Suppose he sees a beauty in the chosen room. What should each of their credences for H be?
For the Selector this is easy to calculate. Because he is twice as likely to see a beauty in the room if the coin landed on T, simple Bayesian updating gives his probability for H as 1/3.
For Beauty, her room has the same chance of being chosen (1/2) regardless of whether the coin landed on H or T. Therefore seeing the Selector gives her no new information about the coin toss, and her answer should be the same as in the original SBP: 1/2 if she is a halfer, 1/3 if she is a thirder.
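The two updates above come down to a few lines of arithmetic. A minimal sketch (the 1/2 likelihoods are exactly the room-choice probabilities from the setup):

```python
# Selector's update: P(sees a beauty | H) = 1/2 (one beauty, two rooms),
# P(sees a beauty | T) = 1 (both rooms occupied).
prior_h, prior_t = 0.5, 0.5
like_h, like_t = 0.5, 1.0
posterior_h = prior_h * like_h / (prior_h * like_h + prior_t * like_t)
print(posterior_h)  # 0.333... -> the Selector's 1/3

# Beauty's update: her room is chosen with probability 1/2 under H and
# under T alike, so the likelihood ratio is 1 and her credence is unchanged.
like_h_beauty, like_t_beauty = 0.5, 0.5
ratio = like_h_beauty / like_t_beauty
print(ratio)  # 1.0 -> no update for Beauty
```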
This means that according to halfers the two of them give different answers, while according to thirders they give the same answer. Notice that the Selector and Beauty can communicate freely however they want; they have the same information regarding the coin toss. So halving gives rise to a perspective disagreement even when both parties share the same information.
This perspective disagreement is unusual (and runs against Aumann's Agreement Theorem), so it could be used as evidence against halving, thus supporting Thirdism and SIA. I will show the problems with SIA in another thought experiment; for now I want to argue that this disagreement has a logical basis.
Let's take a frequentist approach and see what happens if the experiment is repeated, say, 1000 times. For the Selector, this simply means someone goes through the potential cloning 1000 times, and each time he chooses a random room. On average there would be 500 H and 500 T. He would see a beauty all 500 times after T but only 250 times after H, so out of those 750 sightings, 1/3 follow H. Therefore he is correct in giving 1/3 as his answer.
For Beauty, a repetition simply means she goes through the experiment again and wakes up in a random room awaiting the Selector's choice. So by her count, taking part in 1000 repetitions means she recalls 1000 coin tosses after waking up, of which about 500 should be H and 500 T. She would see the Selector about 500 times, in equal numbers after T and after H. Therefore her answer of 1/2 is also correct from her perspective.
If we call the creation of a new beauty a "branch-off", then from the Selector's perspective experiments from all branches count as repetitions, whereas from Beauty's perspective only experiments from her own branch count. This difference leads to the disagreement.
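The frequentist counts above can be checked with a quick Monte Carlo simulation. A sketch, tracking one particular beauty (the "original") as the fixed perspective:

```python
import random

random.seed(0)
trials = 100_000
selector_obs = []  # coin results for runs where the Selector sees a beauty
beauty_obs = []    # coin results for runs where the original beauty sees him

for _ in range(trials):
    coin = random.choice("HT")
    # The original beauty occupies a random room; under T the copy fills
    # the other room, so both rooms are occupied.
    original_room = random.randint(0, 1)
    occupied = {0, 1} if coin == "T" else {original_room}
    chosen = random.randint(0, 1)
    if chosen in occupied:
        selector_obs.append(coin)   # the Selector sees a beauty
    if chosen == original_room:
        beauty_obs.append(coin)     # the original beauty sees the Selector

print(selector_obs.count("H") / len(selector_obs))  # ~1/3
print(beauty_obs.count("H") / len(beauty_obs))      # ~1/2
```

Both relative frequencies are computed from the same run of coin tosses; they differ only because the two observers count different subsets of runs as "their" observations, which is exactly the branch-off point above.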
This disagreement can also be demonstrated with betting odds. In case of T, choosing either of the two rooms leads to the same observation for the Selector: he always sees a beauty and enters another bet. For the two beauties, however, the Selector's choice leads to different observations: each either sees him and enters another bet, or does not. So in case of T the Selector is twice as likely to enter a bet as any particular beauty, giving them different betting odds.
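The bet-entry ratio can be made concrete with expected counts over 1000 runs, reusing the frequencies from the repetition argument above (a sketch, nothing more):

```python
runs = 1000
h_runs, t_runs = runs / 2, runs / 2  # fair coin: 500 each on average

# The Selector enters a bet whenever he sees a beauty:
# always after T, half the time after H.
selector_bets = t_runs * 1.0 + h_runs * 0.5  # 500 + 250

# Any one particular beauty enters a bet only when her room is chosen,
# which happens with probability 1/2 regardless of the coin.
beauty_bets = t_runs * 0.5 + h_runs * 0.5    # 250 + 250

print(selector_bets, beauty_bets)  # 750.0 500.0
# Restricted to T-runs: 500 bets for the Selector vs. 250 for her,
# i.e. twice as many, matching the ratio claimed above.
```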
The above reasoning applies straightforwardly to the original SBP. Conceptually it is just an experiment whose duration is divided into two parts by a memory wipe in case of T; the exact duration, whether two days or a week or five years, is irrelevant. Therefore, from Beauty's perspective, repeating the experiment means her subsequent awakenings must be shorter so as to fit inside her current awakening. For example, if in the first experiment the two possible awakenings happen on different days, then in the next repetition the two possible awakenings can happen in the morning and afternoon of the current day. Further repetitions keep dividing the available time; theoretically the experiment can be repeated indefinitely in the form of a supertask. By her count, half of those repetitions would be H. Compare this with an outsider who never experiences a memory wipe: for him, all repetitions across those two days are equally valid. The disagreement pattern remains the same as in the DB case.
PS: Due to its length I'm breaking this into several parts. The next part will be a thought experiment countering SIA and Thirdism, which I will post in a few days if anyone's interested.