all-knowing agent with perfect powers of prediction
The existence of an all-knowing agent with perfect powers of prediction makes a mockery of the very idea of causality, at least as I understand it. (I won’t go into details here, because it doesn’t really matter, as you’ll see.) Obviously causal decision theory doesn’t work if causality doesn’t make sense. However, since I assign negligible probability to the existence of such a being, I can still think that CDT is correct for practical purposes, while remembering that it can break down in extreme situations.
However, this doesn’t really matter for your point, which is (in part) based on this principle:
I should have been asking which decision theory would lead to the greatest payoff.
So if we alter the story to make it compatible with causality (as Spurlock did), then the answer is still that CDT does not lead to the greatest payoff.
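To make the payoff gap concrete, here is a minimal sketch (my own illustration, not from the original discussion), assuming the standard payoffs of $1,000 in the transparent box and $1,000,000 in the opaque box, with a predictor whose accuracy is p:

```python
def expected_payoff(strategy: str, p: float) -> float:
    """Expected dollars for a 'one-box' or 'two-box' strategy,
    given predictor accuracy p (assumed standard Newcomb payoffs)."""
    if strategy == "one-box":
        # With probability p the predictor foresaw one-boxing,
        # so the opaque box contains $1,000,000.
        return p * 1_000_000
    elif strategy == "two-box":
        # The $1,000 is always there; the $1,000,000 is present
        # only if the predictor erred (probability 1 - p).
        return 1_000 + (1 - p) * 1_000_000
    raise ValueError(f"unknown strategy: {strategy}")

# With a highly accurate predictor, one-boxing dominates in expectation:
p = 0.99
print(expected_payoff("one-box", p))   # 990000.0
print(expected_payoff("two-box", p))   # 11000.0
```

On these (assumed) numbers, one-boxing wins in expectation whenever p exceeds roughly 0.5005, which is why even a merely good predictor, not just a perfect one, leaves the two-boxer worse off on average.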
However (and now I’m finally getting to my point), this doesn’t mean that CDT is incorrect! Although it is normally beneficial to know the truth, there are situations in which it is beneficial (and therefore rational, in a decision-theoretic sense) to believe falsehoods, and this may be one of them. (But the positivist in me wants to object that the correctness of CDT, as distinct from the usefulness of belief in it, is not a matter of observable fact and therefore meaningless.)
So I still want to say that I should pick two boxes. But now (now being after discussion of Eliezer’s post on the subject) I add that I also should be the type of person who would pick one box, and furthermore that this is more important (at least when Newcomb’s Problem is the only relevant situation), even if being such a person would, in fact, lead me to mistakenly pick one box.