Newcomb’s paradox: a complete solution.

Here is the complete solution to Newcomb’s paradox.

Setup:

Omega performs a brainscan on you at 12pm and expects you to make your choice at 6pm. Based on the brainscan, Omega makes a prediction and classifies you into one of two categories: either you will take only 1 box, or you will take 2 boxes. The AI is very good at classifying brainscans.

Box A is transparent and contains £1k.

Box B contains £1m if Omega classifies you as a 1-box person, or £0 if Omega classifies you as a 2-box person.

Do you choose to take both boxes, or only box B?

Note: I mention the brainscan to make things more concrete; the exact method by which Omega makes its prediction is not important.
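Before going case by case, it may help to write the reward rule down explicitly. Here is a minimal Python sketch of it; the function name payoff and the text labels are my own, purely illustrative, while the amounts follow the setup above:

```python
# Reward rule as described in the setup.
# "classification" is Omega's prediction; "action" is what you do at 6pm.

BOX_A = 1_000        # transparent box A always contains £1k
BOX_B = 1_000_000    # box B contains £1m only if Omega predicted "1-box"

def payoff(classification: str, action: str) -> int:
    """Total winnings in pounds for a given prediction and action."""
    box_b = BOX_B if classification == "1-box" else 0
    return box_b + (BOX_A if action == "take both" else 0)

for classification in ("1-box", "2-box"):
    for action in ("take only B", "take both"):
        print(f"{classification} prediction, {action}: £{payoff(classification, action):,}")
```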

Case 1: You have no free will.

Then the question of which box you should take is moot since, without free will, you cannot make any such decision. The exact amount you win is out of your control.

The decision tree looks like this:
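A minimal sketch of that tree in Python, under the assumption (stated in the setup) that Omega's classification is accurate, so the action you end up taking matches it:

```python
# Case 1: no free will. There is no decision node; Omega's classification
# determines the branch, and you simply receive whatever falls out.
branches = {
    "classified 1-box -> you take only B": 1_000_000,
    "classified 2-box -> you take both":   1_000,
}
for branch, prize in branches.items():
    print(f"{branch}: £{prize:,}")
```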

Case 2: You have some free will.

Case 2.1: You can change your brainstate such that Omega will classify you as a 1-box or a 2-box person, but you are incapable of acting against the beliefs derived from your brainstate.

In this case, the optimal decision is to choose to believe in the 1-box strategy; this alters your brainstate so that you will be classified as a 1-box person. By doing this, you will gain £1m. You would be incapable of believing in the 1-box strategy and then deciding to take 2 boxes.

In this case the decision tree looks like:
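A minimal Python rendering of that tree (the labels are my own; the amounts follow the setup):

```python
# Case 2.1: the only free choice is the brainstate; the action is then forced
# to match the beliefs that brainstate produces.
tree = {
    "choose 1-box brainstate": ("classified 1-box", "take only B", 1_000_000),
    "choose 2-box brainstate": ("classified 2-box", "take both",   1_000),
}
for choice, (classification, action, prize) in tree.items():
    print(f"{choice} -> {classification} -> {action}: £{prize:,}")

print("Best choice:", max(tree, key=lambda c: tree[c][2]))  # 1-box brainstate
```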

This makes it clear that choosing the 1-box brainstate is better.

Case 2.2: You cannot change your brainstate, and you are incapable of acting against your beliefs.

For all intents and purposes, this is the same as Case 1: you might have some free will, but not over the decisions that matter in this scenario.

Refer back to the decision tree in Case 1.

Case 2.3: You cannot change your brainstate, but you are capable of choosing to take either 1 or 2 boxes.

Case 2.3.1: Omega classifies you as a 1-box person.

In this case, you should take two boxes. According to the initial premise, this case should be extremely unlikely, as it means Omega will have classified you incorrectly. But most formulations mention that it is possible for Omega to be wrong, so I leave it in for the sake of a complete solution.
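Sketching the two branches available to you here (box B already contains £1m, since Omega classified you as a 1-box person; the labels are mine):

```python
# Case 2.3.1: box B already contains £1m; your choice cannot change that.
options = {"take only B": 1_000_000, "take both": 1_000_000 + 1_000}
for action, prize in options.items():
    print(f"{action}: £{prize:,}")
print("Best action:", max(options, key=options.get))  # take both
```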

In this case you should take the 2-box path.

Case 2.3.2: Omega classifies you as a 2-box person.

The decision tree looks like this:
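Rendered the same way (box B is empty, since Omega classified you as a 2-box person):

```python
# Case 2.3.2: box B is already empty; your choice cannot change that.
options = {"take only B": 0, "take both": 1_000}
for action, prize in options.items():
    print(f"{action}: £{prize:,}")
print("Best action:", max(options, key=options.get))  # take both
```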

In this case you should take the 2-box path.

Back to discussion of Case 2.3:

In this case, we will not know whether Omega classified us as a 1-box or a 2-box person. However, fortunately for us, the strategy in both 2.3.1 and 2.3.2 is the same: take 2 boxes.

Therefore the strategy for this case is: Take 2 boxes.

Case 2.4: You can change your brainstate such that Omega will classify you as a 1-box or a 2-box person, and you are capable of acting against the beliefs derived from your brainstate.

This is the case with the most free will allowed to the person in this scenario. Note that it also goes against the premise somewhat, since it stipulates that you are capable of fooling Omega. But again, I leave it here for completeness.

In this case, the decision tree is the most complicated:
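A sketch of the full tree in Python, assuming (as this case stipulates) that Omega's classification tracks the brainstate you choose, while your action at 6pm is chosen freely:

```python
# Case 2.4: two separate decisions. The brainstate fixes Omega's classification
# (and hence the contents of box B); the action at 6pm is chosen independently.
BOX_A, BOX_B = 1_000, 1_000_000
outcomes = {}
for brainstate in ("1-box brainstate", "2-box brainstate"):
    box_b = BOX_B if brainstate == "1-box brainstate" else 0
    for action in ("take only B", "take both"):
        outcomes[(brainstate, action)] = box_b + (BOX_A if action == "take both" else 0)

for (brainstate, action), prize in outcomes.items():
    print(f"{brainstate} -> {action}: £{prize:,}")

print("Best plan:", max(outcomes, key=outcomes.get))  # ('1-box brainstate', 'take both')
```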

In this case, it is pretty clear that the optimal solution is: (1-box brainstate) → (take 2 boxes)

Conclusion:

Here is a summary of all the different strategies depending on which universe you live in:

Case 1: No decision. The prize is outside of your control.

Case 2.1: Choose the 1-box brainstate. Leads to taking 1 box and a guaranteed £1m.

Case 2.2: No decision. The prize is outside of your control.

Case 2.3: Choose to take 2 boxes. Guaranteed £1k; the £1m is outside of your control.

Case 2.4: Choose (1-box brainstate) → (take 2 boxes). Guaranteed £1k + £1m.

In short, choose the 1-box brainstate if you can, and choose 2 boxes if you can.

Discussion of solution:

It is worth noting that which case you find yourself in is out of your hands; it depends on the capabilities of human intelligence and free will. In particular, whether or not you can willingly alter your brainstate is more a matter of psychology than of decision theory.

In my opinion, given the wording of the problem, we are probably in the Case 2.1 universe. This is essentially a physicalist position, because I believe our actions are directly caused by our brainstate. It rules out Cases 2.3 and 2.4, which both have the person taking actions that are independent of their physical brainstate; I could see a dualist believing that 2.3 or 2.4 are possible. I also believe in free will, which rules out Case 2.2 and Case 1. Essentially, it is impossible for humans to alter their brainstate without truly believing it, at which point they will be forced to take only 1 box.

I also think this problem confuses a lot of people because it is not immediately clear that there are actually 2 decisions being made: choosing your brainstate, and choosing the number of boxes to take. These two become muddled together and seem to cause a paradox, since we want a 1-box brainstate but also want to take 2 boxes. TDT believers only see the first decision; CDT believers only see the second. The key is realising that there are two decisions. Whether humans are actually capable of making both of them is up for debate, but that is a question of psychology, not decision theory.