This is a very nice post that has clarified my understanding a lot.
Previously I thought that it was just “per experiment” vs “per awakening” being underspecified in the problem. But you are completely correct that when we consider “per awakening”, it’s not really acceptable to treat each awakening as random when consecutive awakenings are correlated.
I assume that the obvious extension to some of the anthropic thought experiments where I am copied also holds? For example: a coin is flipped; on heads I wake up in a room, on tails 1E6 identical copies of me wake up in separate rooms. I don’t reason “it’s almost certainly tails, because I am in a room”; instead I reason “the two options were heads & one awakening, and tails & 1E6 awakenings; two options, so it’s 50/50.” [I can still legitimately decide that I care more about worlds where more of me exist, and act accordingly, but that is a values argument, not a probability one.]
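My 50/50 reading can be checked with a quick Monte Carlo sketch (Python; the trial count and variable names are mine, purely illustrative). Counting worlds gives about one half, while counting awakenings gives the thirder-style one-in-a-million number, so the two answers come apart exactly as described:

```python
import random

N_COPIES = 10**6   # copies created on tails
TRIALS = 100_000   # simulated runs of the experiment

heads_worlds = 0       # runs where the coin came up heads
heads_awakenings = 0   # awakenings that occur in heads-worlds
total_awakenings = 0   # awakenings across all runs

for _ in range(TRIALS):
    heads = random.random() < 0.5
    if heads:
        heads_worlds += 1
        heads_awakenings += 1
        total_awakenings += 1        # one awakening on heads
    else:
        total_awakenings += N_COPIES  # 1E6 awakenings on tails

print(heads_worlds / TRIALS)                # per-world: ~0.5
print(heads_awakenings / total_awakenings)  # per-awakening: ~1/(1 + 1E6)
```

The simulation doesn't settle which count is the right one to call “probability”; it just makes concrete that the per-world frequency stays at one half no matter how large N_COPIES gets, while the per-awakening frequency is driven entirely by it.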
You are most welcome!

> I assume that the obvious extension to some of the anthropic thought experiments where I am copied also holds?
Broadly yes. I’ve briefly talked about such cases in this post and the next one. But be mindful: the experiment where you may or may not be separated into multiple people is not exactly isomorphic to Sleeping Beauty, despite what the traditional discourse about anthropics might make you think. In Sleeping Beauty, on Tails the same participant goes through both awakenings, while here different people experience the awakenings in different rooms. The causal graphs are different. So you actually are able to reason about a specific instance of you in a particular room.
Suppose that on Heads you awaken in Room 1, but on Tails you are split into two people who awaken in Room 1 and Room 2. Being awake is no evidence one way or the other. But knowing that you are in Room 1 is evidence in favor of Heads. Here Lewis’ model actually is a good fit.
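A minimal simulation of that update (Python; it assumes, purely for illustration, that on Tails you are equally likely to be the Room 1 or the Room 2 copy). Conditioning on finding yourself in Room 1 pushes the credence in Heads from 1/2 to about 2/3, which is Lewis’ number:

```python
import random

TRIALS = 100_000  # simulated runs of the experiment
room1_total = 0   # runs in which "you" end up in Room 1
room1_heads = 0   # ...and of those, how many were Heads

for _ in range(TRIALS):
    heads = random.random() < 0.5
    if heads:
        my_room = 1  # Heads: the single awakening is in Room 1
    else:
        # Tails: you are one of the two copies, with equal chance
        my_room = random.choice([1, 2])
    if my_room == 1:
        room1_total += 1
        if heads:
            room1_heads += 1

print(room1_heads / room1_total)  # ~2/3
```

The 2/3 comes out of P(Room 1 | Heads) = 1 versus P(Room 1 | Tails) = 1/2: Room 1 is guaranteed on Heads but only even odds on Tails, so learning it is Room 1 favors Heads.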