“Fake Options” in Newcomb’s Problem

This is an exploration of a way of looking at Newcomb’s Problem that helped me understand it. I hope somebody else finds it useful. I may add discussions of other game theory problems in this format if anybody wants them.

Consider Newcomb’s Problem: Omega offers you two boxes, one transparent and containing $1,000, the other opaque and containing either $1 million or nothing. Your options are to take both boxes, or to take only the second one; but Omega has put money in the second box only if it has predicted that you will take just one box. A person in favor of one-boxing says, “I’d rather have a million than a thousand.” A two-boxer says, “Whether or not box B contains money, I’ll get $1,000 more if I take box A as well. It’s either $1,001,000 vs. $1,000,000, or $1,000 vs. nothing.” To reach these different decisions, the agents are working from two different ways of visualising the payoff matrix: the two-boxer sees four possible outcomes, while the one-boxer sees only two, the other two having negligible probability.

The two-boxer’s payoff matrix looks like this:

| Decision | Box B: Money | Box B: No money |
|----------|--------------|-----------------|
| 1-box    | $1,000,000   | $0              |
| 2-box    | $1,001,000   | $1,000          |

The outcomes $0 and $1,001,000 both require Omega to have made a wrong prediction. But as the problem is formulated, Omega is superintelligent and has been right 100 out of 100 times so far. So the one-boxer, taking this into account, describes the payoff matrix like this:

| Decision | Box B: Money | Box B: No money |
|----------|--------------|-----------------|
| 1-box    | $1,000,000   | not possible    |
| 2-box    | not possible | $1,000          |

If Omega is really a perfect (or nearly perfect) predictor, the only possible (or not hugely unlikely) outcomes are $1,000 for two-boxing and $1 million for one-boxing, and considering the other outcomes is an epistemic failure.
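One way to see how the two matrices relate is to treat Omega’s accuracy as a probability `p` and compute expected values from the two-boxer’s full matrix. This is a minimal sketch, not part of the original argument; the payoff figures come from the tables above, and the function name and the choice of `p` are my own illustration:

```python
def expected_value(p, one_box):
    """Expected payoff given Omega predicts correctly with probability p.

    Payoffs are taken from the two-boxer's four-outcome matrix above.
    """
    if one_box:
        # Omega right (prob p): box B is full, you get $1,000,000.
        # Omega wrong (prob 1-p): box B is empty, you get $0.
        return p * 1_000_000 + (1 - p) * 0
    else:
        # Omega right (prob p): box B is empty, you get only box A's $1,000.
        # Omega wrong (prob 1-p): both boxes pay out, $1,001,000.
        return p * 1_000 + (1 - p) * 1_001_000

# With a near-perfect predictor, one-boxing wins by a wide margin:
# roughly $990,000 vs. roughly $11,000 at p = 0.99.
print(expected_value(0.99, one_box=True))
print(expected_value(0.99, one_box=False))
```

Working through the inequality, one-boxing has the higher expected value whenever `p` exceeds 1,001,000 / 2,000,000 = 0.5005 — so the one-boxer’s simplified matrix is a good approximation for any predictor much better than a coin flip, and certainly for one that has gone 100 for 100.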