Monday and Tuesday awakening in one branch are causally connected. Different branches are not.
What if, as part of the thought experiment, we assume that the people running Sleeping Beauty make sure that the Monday and Tuesday awakenings are causally disconnected to the best of their abilities? (I.e., they try to ensure that nothing that Beauty does during each awakening can affect the world outside the experiment or persist to the next awakening.) Would that change your answers? (I.e., why can’t we define P(Monday) to mean P(Monday awakening happens in this causal bubble), and so on?)
Maybe you reply that they can’t enforce causal disconnectedness between Monday and Tuesday with certainty, so Beauty still has to treat them as causally connected. But then we also can’t be sure that different Everett branches are causally disconnected with absolute certainty (that’s just what our current best theory says), so the two situations still seem analogous.
See also “preferential interaction” from Indexical Uncertainty and the Axiom of Independence:
But what if p represents an indexical uncertainty, which is uncertainty about where (or when) you are in the world? In that case, what occurs at one location in the world can easily interact with what occurs at another location, either physically, or in one’s preferences. If there is physical interaction, then “consequences of a choice at a location” is ill-defined. If there is preferential interaction, then “utility of the consequences of a choice at a location” is ill-defined. In either case, it doesn’t seem possible to compute the utility of the consequences of a choice at each location separately and then combine them into a probability-weighted average.
Subsequent to writing that post, people also came up with the idea of “acausal interactions” as in acausal trade and extortion, which similarly violate the axiom of independence.
What if, as part of the thought experiment, we assume that the people running Sleeping Beauty make sure that the Monday and Tuesday awakenings are causally disconnected to the best of their abilities? (I.e., they try to ensure that nothing that Beauty does during each awakening can affect the world outside the experiment or persist to the next awakening.) Would that change your answers? (I.e., why can’t we define P(Monday) to mean P(Monday awakening happens in this causal bubble), and so on?)
As long as the Monday and Tuesday awakenings on Tails are not statistically independent, it wouldn’t, and that is what matters here. By the definition of the experimental setting, when the coin is Tails, what happens to Beauty on Monday (she awakens) always determines what happens to her on Tuesday (she awakens a second time). Sequential events are definitely not mutually exclusive and thus can’t be elements of a sample space.
Now, we can remove this causality/correlation with a different experiment: simulate some number of Sleeping Beauty experiments, collect the resulting list of awakenings, select a random one of them, and put Beauty through it. Then the thirder model, which implicitly assumes that the current awakening is randomly sampled, would be correct.
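To make the contrast concrete, here is a minimal Python sketch (my illustration, not part of the original comment) of that modified experiment: simulate many runs, pool every awakening, and treat the current awakening as a uniform draw from the pool. Under that sampling assumption the thirder numbers fall out.

```python
import random

def run_experiments(n_runs: int, seed: int = 0) -> list[tuple[str, str]]:
    """Simulate n_runs Sleeping Beauty experiments and return
    every awakening as a (coin, day) pair."""
    rng = random.Random(seed)
    awakenings = []
    for _ in range(n_runs):
        if rng.random() < 0.5:                 # Heads: one awakening
            awakenings.append(("Heads", "Monday"))
        else:                                  # Tails: two awakenings, same run
            awakenings.append(("Tails", "Monday"))
            awakenings.append(("Tails", "Tuesday"))
    return awakenings

# Pool all awakenings; selecting one uniformly at random means the
# long-run frequencies are just the proportions in the pooled list.
pool = run_experiments(100_000)
p_heads = sum(1 for coin, _ in pool if coin == "Heads") / len(pool)
p_monday = sum(1 for _, day in pool if day == "Monday") / len(pool)
print(p_heads, p_monday)   # roughly 1/3 and 2/3: the thirder answers
```

Note that in the original experiment this sampling step never happens: on Tails the Monday and Tuesday awakenings occur together in the same run, which is exactly the correlation the reply points to.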
Subsequent to writing that post, people also came up with the idea of “acausal interactions” as in acausal trade and extortion, which similarly violates the axiom of independence.
The whole point of acausal trade is that it’s, well, acausal. Branches are mutually exclusive in probability-theoretic terms, and yet you may choose to care about a different branch in terms of your utilities. This is not a problem for probability theory, because it doesn’t have utilities and the complications they add. This is, once again, why it’s helpful to disentangle probability theory from decision theory and solve the former before engaging with the latter.