Everyone agrees what you should do if you can precommit. The question becomes philosophically interesting when an agent faces this problem without having had the opportunity to precommit.
Okay, I see how that technique of breaking circularity in the model looks like precommitment.
I still don’t see what this has to do with counterfactuals, though.
“You decide either “I am a one-boxer” or “I am a two-boxer,” the boxes get filled according to a rule, and then you pick deterministically according to a rule. It’s all forward reasoning; it’s just a bit weird because the action in question happens way before you are faced with the boxes.”
So you wouldn’t class this as precommitment?
I realize now that this, expressed as a DAG, looks identical to precommitment.
Except I also think it’s a faithful representation of the typical Newcomb scenario.
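Concretely, here is that forward model as a few lines of Python. This is a minimal sketch, assuming the standard payoffs ($1,000,000 in the opaque box if you are predicted to one-box, $1,000 always in the transparent box); there is a single disposition node, and both the prediction and the action are deterministic functions of it:

```python
# Minimal sketch of the forward-reasoning model. Assumed payoffs: $1,000,000
# in the opaque box if you are predicted to one-box, $1,000 always in the
# transparent box. One "disposition" node drives everything downstream.

def newcomb(disposition: str) -> int:
    """disposition is 'one-boxer' or 'two-boxer'; returns the payoff."""
    prediction = disposition                                # predictor reads the disposition
    opaque = 1_000_000 if prediction == "one-boxer" else 0  # boxes get filled by rule
    action = disposition                                    # you pick deterministically by rule
    return opaque if action == "one-boxer" else opaque + 1_000

assert newcomb("one-boxer") == 1_000_000
assert newcomb("two-boxer") == 1_000
```

Read this way, it is just precommitment: the only choice point sits upstream of the prediction, and plain forward reasoning says to set it to “one-boxer.”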
The paradox only arises if you can say “I am a two-boxer” (by picking up two boxes) while you were predicted to be a one-boxer. This can only happen if there are multiple nodes for two-boxing set to different values.
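In model terms, that amounts to splitting the one disposition node into two, a predicted value and an actual value, and letting an intervention set them independently. A sketch (the names here are mine, purely illustrative):

```python
# Sketch of the split model: the predicted disposition and the actual choice
# are now separate nodes that an intervention can set to different values.

def newcomb_split(predicted: str, actual: str) -> int:
    opaque = 1_000_000 if predicted == "one-boxer" else 0
    return opaque if actual == "one-boxer" else opaque + 1_000

# Once the nodes can disagree, two-boxing dominates for every fixed prediction:
assert newcomb_split("one-boxer", "two-boxer") == 1_001_000  # beats 1_000_000
assert newcomb_split("two-boxer", "two-boxer") == 1_000      # beats 0
```

That pointwise dominance is exactly the two-boxer’s argument, and it is only available because the intervention ignored the link tying the two nodes together.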
But really, this is a problem of the kind solved by superspecs in my Onward! paper. There is a constraint that the prediction of two-boxing must be the same as the actual two-boxing. Traditional causal DAGs can only express this by making them literally the same node; superspecs allow more flexibility. I am unclear exactly how FDT handles this, but it offers a similar analysis of the problem (“CDT breaks correlations”).
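To show the shape of the fix (this is only a sketch of the constraint, not the actual superspecs machinery): keep the two nodes, but add a cross-node constraint that the prediction equals the choice, and only evaluate assignments that satisfy it:

```python
# Illustrative sketch only; not the superspecs formalism itself. The point is
# just that a cross-node equality constraint removes the inconsistent worlds.

def payoff(predicted: str, actual: str) -> int:
    opaque = 1_000_000 if predicted == "one-boxer" else 0
    return opaque if actual == "one-boxer" else opaque + 1_000

consistent = [
    (predicted, actual, payoff(predicted, actual))
    for predicted in ("one-boxer", "two-boxer")
    for actual in ("one-boxer", "two-boxer")
    if predicted == actual  # the constraint: prediction matches choice
]
# Only the diagonal survives: one-boxing pays 1_000_000, two-boxing pays 1_000,
# so an agent maximizing over consistent assignments one-boxes.
print(consistent)
```

The “two-box while predicted to one-box” worlds never make it onto the table, so the dominance argument has nothing to grip.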