Posting here to avoid introducing an irrelevant aside on one of the main [ETA: Discussion-main, not Main-main] threads, regarding the “retrocausality” of Newcomb-like problems.
Causality is always bidirectional; it is information which flows in only one direction. Once you dissolve that distinction, the question becomes one of information, which doesn’t need to involve any strange loops at all: the behavior of Newcomb-like problems isn’t produced by your actions changing history, but by information about what your action will be changing the future, or having already changed it.
What’s lost is that this kind of forward-facing information is at play all the time; pretty much every one of us constantly runs low-level predictions of everyone around us (avoiding collisions by predicting where people are going to walk, for a particularly low-level example), and usually, though not always, gets them right. What’s unusual in Newcomb-like problems isn’t the predictive element, but the uncanny accuracy of the prediction. And if somebody could predict another person’s future behavior with uncanny accuracy, we wouldn’t resort to retrocausality as an explanation; we would simply credit them with very good predictive power.
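To make the role of predictive accuracy concrete, here is a quick sketch of the expected-value arithmetic, using the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box iff the predictor forecast one-boxing) and modeling the predictor simply as being correct with probability p — no retrocausality anywhere in the calculation:

```python
# Standard Newcomb payoffs: box A always holds $1,000; box B holds
# $1,000,000 if and only if the predictor forecast one-boxing.
# p = probability the predictor's forecast matches your actual choice.

def ev_one_box(p):
    # Box B is full exactly when the predictor correctly foresaw
    # one-boxing, which happens with probability p.
    return p * 1_000_000

def ev_two_box(p):
    # You always take box A's $1,000; box B is full only when the
    # predictor was wrong about you (probability 1 - p).
    return 1_000 + (1 - p) * 1_000_000

# Break-even: p * 1e6 = 1_000 + (1 - p) * 1e6  =>  p = 0.5005.
# For any predictor better than a coin flip plus a sliver,
# one-boxing has the higher expected payoff.
for p in (0.5, 0.5005, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))
```

Even a modestly accurate predictor, not an uncanny one, is enough to tip the expected values; the uncanny accuracy in the original problem just makes the gap dramatic.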
Which is to say, once you notice there is nothing particularly interesting or reality-violating happening in Newcomb, the issue substantially dissolves. Two-boxing as a statement of the value of human autonomy is about as meaningful as suddenly turning and walking into traffic because drivers anticipating that you won’t do so is somehow an affront to human dignity.