Formalizing Newcomb’s

This post was inspired by taw urging us to mathematize Newcomb’s problem and Eliezer telling me to post stuff I like instead of complaining.

To make Newcomb’s problem more concrete, we need a workable model of Omega. Let me count the ways:

1) Omega reads your decision from the future using a time loop. In this case the contents of the boxes are directly causally determined by your actions via the loop, and it’s logical to one-box.
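A minimal sketch of case 1, assuming the standard payoffs (a million in the opaque box, a thousand in the transparent one, neither stated above): since the loop makes the opaque box’s contents track what you actually do, one-boxing wins outright.

```python
# Assumed standard payoffs: $1,000,000 in the opaque box, $1,000 in the
# transparent one. The time loop means the opaque box is filled iff you
# actually one-box.

def payoff(one_box: bool) -> int:
    opaque = 1_000_000 if one_box else 0      # contents track your real choice
    transparent = 1_000
    return opaque if one_box else opaque + transparent

print(payoff(True))   # 1000000
print(payoff(False))  # 1000
```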

2) Omega simulates your decision algorithm. In this case the decision algorithm has indexical uncertainty about whether it’s being run inside Omega or in the real world, and it’s logical to one-box, thus making Omega give the “real you” the million.
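Here’s a toy sketch of case 2 (the function names and payoffs are mine, not part of the problem): the same decide() gets called once inside Omega’s simulation to fill the box and once “for real”, and from the inside it can’t tell which call it’s in.

```python
def decide() -> str:
    # The simulated call and the real call look identical from in here, so
    # return the answer that pays off when this same code also fixed the box.
    return "one-box"

def omega_fills_box() -> int:
    return 1_000_000 if decide() == "one-box" else 0   # Omega's simulation of you

def play() -> int:
    opaque = omega_fills_box()
    choice = decide()                                   # the "real" run
    return opaque if choice == "one-box" else opaque + 1_000

print(play())  # 1000000
```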

3) Omega “scans your brain and predicts your decision” without simulating you: it calculates the FFT of your brainwaves or whatever. In this case you can intend to build an identical scanner, use it on yourself to determine what Omega predicted, and then do what you please. Hilarity ensues (a sketch follows the notes below).

(NB: if Omega prohibits agents from using mechanical aids for self-introspection, this is in effect a restriction on how rational you’re allowed to be. If so, all bets are off—this wasn’t the deal.)

(Another NB: this case is distinct from case 2 because it requires Omega, and thus your own scanner too, to terminate without simulating everything. A simulating Omega would go into infinite recursion if treated like this.)
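A sketch of the diagonal trick in case 3 (all names hypothetical): copy the scanner, run it on yourself, do the opposite. No terminating, non-simulating predictor gets this agent right, and a simulating one never terminates.

```python
def contrarian(predict):
    # An agent that runs its own copy of the scanner on itself,
    # then does the opposite of whatever the scanner predicts.
    def me():
        return "two-box" if predict(me) == "one-box" else "one-box"
    return me

def naive_scan(agent):
    # A toy terminating "scanner" that doesn't simulate the agent.
    return "one-box"

me = contrarian(naive_scan)
print(me())   # "two-box" -- whatever naive_scan says, me() does the opposite

# A scanner that simulated the agent would call predict inside me inside
# predict... and recurse forever, which is the point of the second NB above.
```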

4) Same as 3, but the universe only has room for one Omega, e.g. God Almighty, so you can never build that second scanner. Then ipso facto it cannot be modelled mathematically, and let’s talk no more.

I guess this one is settled, folks. Any questions?