Normally, you can assume your thought processes are uncorrelated with what's out there. Newcomb-like problems, however, do have the state of the outside universe correlated with your actual thoughts, and this is what throws people off.
If you are unsure whether the state of the universe is X or Y (say with p = 1/2 for simplicity), and you can choose either option A or B, you can calculate the expected utility of choosing A vs. B by taking (1/2)u(A,X) + (1/2)u(A,Y) and comparing it to (1/2)u(B,X) + (1/2)u(B,Y).
In a Newcomb-like problem, where the state of the experiment actually depends on your choice, the expected utility comparison becomes ~1·u(A,X) + ~0·u(A,Y) vs. ~0·u(B,X) + ~1·u(B,Y).
In this case, it boils down to “Is u(A,X) > u(B,Y)?”.
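To make the comparison concrete, here is a minimal Python sketch; the payoff numbers and the ~0.99 predictor accuracy are illustrative assumptions, using the standard Newcomb payoffs with A = one-box, B = two-box, X = big box full, Y = big box empty:

```python
# Expected utility comparison for the two cases above.
# Payoff numbers are illustrative assumptions (standard Newcomb payoffs):
# A = one-box, B = two-box, X = big box full, Y = big box empty.

u = {
    ("A", "X"): 1_000_000,  # one-box, box was filled
    ("A", "Y"): 0,          # one-box, box was empty
    ("B", "X"): 1_001_000,  # two-box, box was filled
    ("B", "Y"): 1_000,      # two-box, box was empty
}

def expected_utility(option, p_state):
    """Sum of u(option, state) weighted by P(state | option)."""
    return sum(p * u[(option, s)] for s, p in p_state.items())

# Uncorrelated case: state is X or Y with p = 1/2 regardless of choice.
fifty_fifty = {"X": 0.5, "Y": 0.5}
print(expected_utility("A", fifty_fifty))  # 500000.0
print(expected_utility("B", fifty_fifty))  # 501000.0 -> choose B

# Newcomb-like case: the state tracks your choice almost perfectly,
# so the comparison collapses to u(A,X) vs. u(B,Y).
print(expected_utility("A", {"X": 0.99, "Y": 0.01}))  # 990000.0
print(expected_utility("B", {"X": 0.01, "Y": 0.99}))  # 11000.0 -> choose A
```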
It is not enough for Omega to have a decent record of getting it right, since you could probably do pretty well just by reading people's comments and guessing based on that.
If Omega made its prediction solely based on a comment you made on LessWrong, you should expect that if you choose A the universe will be in the same state as if you choose B: knowing your ultimate decision doesn't tell you anything, since the only relevant evidence is what you said a month ago.
If, however, Omega actually simulates your thought process in sufficient detail to know for sure which choice you will make, then knowing that you ultimately decide to pick A is strong evidence that Omega has set up X, and if you choose B, you had better expect to see Y.
The reason the answer changes is that the state of the box actually does depend on the thoughts themselves; it's just that you thought the same thoughts when Omega was simulating you before filling the boxes/flipping the coin.
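A rough sketch of this evidential difference (the 0.7 and 0.99 accuracy figures, like the payoffs, are assumptions for illustration): with a comment-based predictor, P(X) is the same whichever option you take, so two-boxing dominates; with a simulating predictor, your choice itself moves P(X), and one-boxing wins.

```python
# Sketch of the evidential difference between the two kinds of Omega.
# Accuracy figures (0.7, 0.99) and payoffs are assumptions for illustration.

u = {("A", "X"): 1_000_000, ("A", "Y"): 0,
     ("B", "X"): 1_001_000, ("B", "Y"): 1_000}

def eu(option, p_x):
    """E[u | option] when P(X | option) = p_x."""
    return p_x * u[(option, "X")] + (1 - p_x) * u[(option, "Y")]

# Comment-based Omega: the prediction was fixed a month ago, so P(X) is
# the same whichever option you take now; your choice adds no evidence.
p_x = 0.7  # assumed: your old comments suggested you'd pick A
print(eu("A", p_x), eu("B", p_x))  # 700000.0 701000.0 -> B dominates

# Simulating Omega: the simulation decided whatever you decide, so your
# choice is near-decisive evidence about the state of the box.
print(eu("A", 0.99), eu("B", 0.01))  # 990000.0 11000.0 -> A wins
```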
If you aren't sure whether you're just Omega's simulation, you had better one-box/pay Omega. If we're talking about a wannabe Omega that just makes decent predictions based on comments, then you defect (though if you actually expect a situation like this to come up, you argue that you won't).
Omega's actions depend only on your decision (action), or in this case your counterfactual decision, not on your thoughts or the algorithm you use to reach the decision. The action, of course, depends on your thoughts, but that's the usual case. You may move several steps back, seeking the ultimate cause, but that's pretty futile.