Causal decision theory is a subset of evidential decision theory. We have strong evidence that information flows from past to future. If we observe new evidence that information flows the other direction, or that the world works differently than we think in a way that allows Omega (or anyone else) to repeatedly react to the future before it happens, then we should shift weight from causal decision theory toward other decision theories. As observations accumulate, our weighting can move gradually among the various decision theories, with evidential decision theory serving as the meta-algorithm that chooses the weights of the others.
Observations are all we have. They may show that information flows from past to future, or that Omega predicts accurately, or some combination. In this kind of decision theory, we estimate the size of the evidence for each kind of decision theory.
The evidence for causal decision theory is large, but it can be bounded: take the log base 2 of the number of synapses in a human brain (about 10^15) multiplied by Dunbar's number (http://en.wikipedia.org/wiki/Dunbar%27s_number, 150). The true evidence may be greater, but this is a limit on how advanced a thing any group of people of any size can learn (without changing how we learn). The result is around 57 bits.
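The estimate above can be checked directly. A minimal sketch (variable names are my own, not from the post):

```python
import math

# Bound on the evidence (in bits) behind causal decision theory:
# synapses in one human brain times Dunbar's number.
synapses_per_brain = 10**15   # rough synapse count for a human brain
dunbars_number = 150          # Dunbar's number: stable social group size

threshold_bits = math.log2(synapses_per_brain * dunbars_number)
print(round(threshold_bits, 2))  # 57.06
```

So the "around 57 bits" figure follows from the two inputs.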
The game played in Newcomb’s Paradox has two important choices, one-boxing and two-boxing, which is why I used log base 2. Combining the evidence from all previous games and other variants of Newcomb’s Paradox: if the evidence that Omega predicts accurately accumulates past 57 bits, then in choices related to that, I would be more likely to one-box. If there have been only 56 observations, even if in all of them two-boxing lost or one-boxing won, I would still be more likely to two-box, because there are more observations that information flows from past to future and that Omega doesn’t know what I will do.
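The decision rule above can be sketched as a simple tally. This assumes, as the post suggests, that each binary observation of Omega predicting correctly is worth log2(2) = 1 bit; the function names are my own:

```python
import math

THRESHOLD_BITS = 57  # estimated evidence behind causal decision theory

def evidence_bits(correct_predictions, outcomes_per_game=2):
    # Each game has 2 relevant outcomes, so one correct prediction is
    # worth log2(2) = 1 bit under this simple counting scheme
    # (an assumption for illustration, not a law).
    return correct_predictions * math.log2(outcomes_per_game)

def choose(correct_predictions):
    # One-box only once the accumulated evidence that Omega predicts
    # accurately exceeds the evidence for causal decision theory.
    if evidence_bits(correct_predictions) > THRESHOLD_BITS:
        return "one-box"
    return "two-box"

print(choose(56))  # two-box: 56 bits does not exceed the threshold
print(choose(58))  # one-box: 58 bits exceeds it
```

The point of the sketch is that the choice flips at the threshold rather than after a single surprising observation.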
The Newcomb threshold of 57 bits is only an estimate for one specific Newcomb problem. For each choice, we should re-weigh the evidence for the different kinds of decision theory, so we can learn to win Newcomb games more often than we lose.
This is my solution to Newcomb’s Paradox.