The primary source of confusion in Newcomb’s problem is that we want to think of our decision as coming “before” the filling of the boxes, even though it physically comes after.
I’ve been puzzling in my amateurish way over Newcomb’s problem a bit. The way I think the causal flow goes is:
T0: Omega accurately simulates the agent at T1-T2, and fills the boxes accordingly.
T1: The agent deliberates about whether to one-box or two-box.
T2: The agent irrevocably commits to one-boxing or two-boxing.
The agent thinks there’s a paradox, because it feels like they’re making a choice at T1. In fact, they are not. To Omega, their behavior is as predictable as a one-line computer program. The “agent” does not choose to one-box or two-box. They are fated to one-box or two-box.
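To make the “one-line computer program” point concrete, here is a toy sketch (my own illustration, not part of the original problem; the function name `agent_decides` is made up, and the dollar amounts are just the usual Newcomb numbers). The payoff is already fixed at T0, because the “choice” at T2 is the same procedure Omega ran earlier:

```python
def agent_decides():
    # Whatever deliberation happens at T1, it bottoms out in a fixed output.
    return "one-box"

# T0: Omega runs the agent's decision procedure and fills the boxes accordingly.
prediction = agent_decides()
opaque_box = 1_000_000 if prediction == "one-box" else 0
transparent_box = 1_000

# T2: the agent "chooses" -- but it is the same procedure Omega already ran at T0.
choice = agent_decides()
payoff = opaque_box if choice == "one-box" else opaque_box + transparent_box
print(choice, payoff)  # one-box 1000000
```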
Understood this way, I don’t think the problem violates causality.
The issue that’s normally focused on is that we want to think our decision is independent, rather than as predictable as the problem states. Once you get over that, there’s no more riddle; it’s just math.
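To spell out the “just math” part, under the standard payoffs ($1,000,000 in the opaque box if and only if Omega predicts one-boxing, a guaranteed $1,000 in the transparent box) and a predictor that is right with probability p, the expected values are p × $1,000,000 for one-boxing versus (1 − p) × $1,000,000 + $1,000 for two-boxing. One-boxing wins for any p above about 0.5005, and with a perfect predictor it is simply $1,000,000 versus $1,000.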
Your causality diagram should start at T0: the configuration of the universe is such that there is no freedom at T2, and Omega knows enough about it to predict what will happen. And you’re correct, the problem doesn’t violate causality; it violates the free-will assumption behind common versions of causal decision theory (CDT).
Note: it’s just a thought experiment, and we need to be careful about updating on fiction. It doesn’t say anything about whether a human decision can be known by a real Omega, only that IF it could, that implies the decision isn’t made when we think it is.
Your causal description is incomplete; the loopy part requires expanding T1:
T0: Omega accurately simulates the agent at T1-T2, determines that the agent will one-box, and puts money in both of the boxes. Omega’s brain/processor contains a (near) copy of the part of the causal diagram at T1 and T2.
T1: The agent deliberates about whether to one-box or two-box. She draws a causal diagram on a piece of paper. It does not contain T1, because it isn’t really useful for her to model her own deliberation as she deliberates. But it does contain T2, and a shallow copy of T0, including the copy of T2 inside T0.
T2: The agent irrevocably commits to one-boxing.
The loopy part is at T1. Forward arrows mean “physically causes”, and backwards arrows mean “logically causes, via one part of the causal diagram being copied into another part”.
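One way to see the backwards arrow in code, as a toy sketch of my own (the helper names `payoff_if` and `deliberate` are made up, and it assumes Omega’s copy of the decision procedure is exact): at T1 the agent can treat “what Omega predicted” and “what I will eventually output” as the same quantity, because one is a copy of the other, and evaluate each option under that identification.

```python
def payoff_if(action: str) -> int:
    # Because Omega's copy is exact, fixing "I take this action" also fixes
    # what Omega predicted -- that's the backwards, "logical" arrow.
    prediction = action
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    return opaque if action == "one-box" else opaque + transparent

def deliberate() -> str:
    # T1: compare the options under the identification above; commit at T2.
    return max(["one-box", "two-box"], key=payoff_if)

print(deliberate())  # one-box
```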
I love your analysis. What do you think about this summary? The solution to this optimization problem is to be the kind of agent that chooses only one box.