Formalizing Newcomb’s

This post was inspired by taw urging us to mathematize Newcomb’s problem and Eliezer telling me to post stuff I like instead of complaining.

To make Newcomb’s problem more concrete we need a workable model of Omega. Let me count the ways:

1) Omega reads your decision from the future using a time loop. In this case the contents of the boxes are directly causally determined by your actions via the loop, and it’s logical to one-box.

2) Omega simulates your decision algorithm. In this case the decision algorithm has indexical uncertainty about whether it’s being run inside Omega or in the real world, and it’s logical to one-box, thus making Omega give the “real you” the million.
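The simulator case can be sketched in a few lines of code. This is a toy model, not anything canonical: the payoff amounts are the standard $1,000 / $1,000,000 from the problem statement, and all function names are my own invention. The key feature is that the same decision procedure runs twice, once inside Omega and once “for real,” so whatever it outputs determines both the prediction and the choice.

```python
# Toy model of case 2: Omega predicts by running the agent's own
# decision procedure, then the agent runs the same procedure "for real".

def omega_fill_boxes(decide):
    """Omega simulates the agent's decision procedure to fill the boxes."""
    prediction = decide()              # the run inside Omega
    box_a = 1_000                      # the transparent box always holds $1,000
    box_b = 1_000_000 if prediction == "one-box" else 0
    return box_a, box_b

def play(decide):
    box_a, box_b = omega_fill_boxes(decide)
    choice = decide()                  # the run in the real world
    return box_b if choice == "one-box" else box_a + box_b

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(play(one_boxer))   # 1000000
print(play(two_boxer))   # 1000
```

Since the decision procedure can’t tell which of its two runs it is in, its output is necessarily correlated with the prediction, and one-boxing comes out ahead.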

3) Omega “scans your brain and predicts your decision” without simulating you: calculates the FFT of your brainwaves or whatever. In this case you can intend to build an identical scanner, use it on yourself to determine what Omega predicted, and then do what you please. Hilarity ensues.

(NB: if Omega prohibits agents from using mechanical aids for self-introspection, this is in effect a restriction on how rational you’re allowed to be. If so, all bets are off—this wasn’t the deal.)

(Another NB: this case is distinct from 2 because it requires Omega, and thus your own scanner too, to terminate without simulating everything. A simulator Omega would go into infinite recursion if treated like this.)
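The “hilarity” in case 3 is a diagonalization, which is easy to make concrete. The sketch below is illustrative and all names are assumptions: the predictor is a stand-in for any terminating, non-simulating scan, and the agent runs an identical copy of it on itself and then does the opposite. No such predictor can be right about this agent.

```python
# Case 3 sketch: a non-simulating predictor is just a terminating
# function from agents to predictions. An agent with an identical
# scanner can diagonalize against it.

def predictor(agent):
    """Stand-in for Omega's scan: any fixed, terminating procedure
    that inspects the agent without simulating its full decision."""
    return agent.scan_result()

class ContrarianAgent:
    def scan_result(self):
        return "one-box"               # what the scan reports

    def decide(self):
        predicted = predictor(self)    # run an identical scanner on yourself
        return "two-box" if predicted == "one-box" else "one-box"

agent = ContrarianAgent()
print(predictor(agent))   # one-box
print(agent.decide())     # two-box -- the prediction is wrong
```

Note that if `predictor` instead called `agent.decide()`, the two would call each other forever, which is exactly the infinite recursion the second NB attributes to a simulator Omega.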

4) Same as 3, but the universe only has room for one Omega, e.g. the God Almighty. Then ipso facto it cannot ever be modelled mathematically, and let’s talk no more.

I guess this one is settled, folks. Any questions?