They then say, however, that they have a credence of 1 in the world state where there is $0 in the opaque box. Given this credence, the smart decision is to two-box (and get $1000) rather than one-box (and get $0).
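A minimal sketch of the expected-value comparison being described here, assuming the standard Newcomb payoffs ($1000 in the transparent box, $1M or nothing in the opaque box); the function name and structure are illustrative, not from the conversation:

```python
def expected_value(action: str, p: float) -> float:
    """Expected payoff of an action, given credence p that the
    opaque box contains $1,000,000."""
    opaque = p * 1_000_000      # expected contents of the opaque box
    transparent = 1_000         # the transparent box always holds $1000
    if action == "one-box":
        return opaque
    if action == "two-box":
        return opaque + transparent
    raise ValueError(f"unknown action: {action}")

# With credence 1 that the opaque box is empty (p = 0), two-boxing
# strictly dominates: $1000 versus $0.
print(expected_value("two-box", 0.0))  # 1000.0
print(expected_value("one-box", 0.0))  # 0.0
```

With p = 0 fixed, two-boxing comes out $1000 ahead, which is exactly the CDT agent's reasoning; the one-boxer's disagreement is about whether p can be held fixed independently of the choice.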
Ah, right, they never expect anything to be in the opaque box, so for them taking the opaque box is basically redundant (“might as well, no harm can come from it”). So they correctly assign probability zero to the event “I’m a two-boxer and there is $1M to be had”.
However, this is supplemented by “the CDTer must two-box” because “the predictor’s choice has already been made”, as if that choice were independent of what they decide. This strange loop can only be unwound by considering how the predictor could know what they will decide before they take themselves to have decided anything. And that requires taking the outside view and going into the free-will analysis.
Yeah, that’s right—so I think the proponent of CDT can be criticised for all sorts of reasons, but I don’t think they’re (straightforwardly) inconsistent.
As a note, the decision in NP is whether to take both the opaque and the transparent box or just the opaque box—so the CDT agent doesn’t merely think they “may as well” two-box; they think they’re actively better off doing so, because it gains them the $1000 in the transparent box.
And yes, I agree that considerations of free will are relevant to NP. People have all sorts of opinions about what conclusion we should draw from these considerations and how important they are.
OK, thanks for clearing this CDT self-consistency stuff up for me.
That’s cool, glad I had something useful to say (and it’s nice to know we weren’t just talking at cross purposes but were actually getting somewhere!)
It is, however, quite frustrating to realize, in retrospect, that I had already gone through this chain of reasoning at least once, and then forgot it completely :(