Newcomb’s problem is important because it led (maybe not directly, but I think it contributed) to the schism between evidential decision theory and causal decision theory.
As far as I can tell, that’s because the causal decision theorists are crippled by using magicless thinking in a magical problem. The only outcome is “huh, people who use all the information provided by a problem do better than people who ignore some of the information!” As schisms go, that seems pretty tame.
The issue is formally expressing the algorithm that uses all the information to get the right answer in Newcomb’s problem.
That does make it clearer why I’m a 0-boxer and uninterested in it. It also suggests I should refrain from approaching it at a level as intense as Eliezer’s paper until I am interested in formality: a correct one-page explanation is unlikely to be formal, and the reason the problem is interesting lies in its formality.