(Just skimmed, also congrats on the work)

Why is this surprising? You’re basically assuming that there is no correlation between what program Omega predicts and what program you actually are. That is, Omega is not a predictor at all! Thus, you obviously two-box, because one-boxing would have no effect on what Omega predicts. (Or maybe the right way to think about this is: it will have a tiny but non-zero effect, because you are one of the |P| programs, but since |P| is huge, that is ~0.)
When instead you condition on a = b, this becomes a different problem: Omega is now a perfect predictor! So you obviously one-box.
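To make the contrast concrete, here is a minimal expected-value sketch in Python. Everything numeric is illustrative rather than from the post: N stands in for the huge |P|, q for the base rate of one-boxing programs in P, and the dollar amounts are the usual Newcomb payoffs.

```python
# Minimal expected-value sketch of the two readings above. Illustrative
# assumptions: N stands in for |P|, q is the base rate of one-boxing
# programs in P, and the payoffs are the usual Newcomb amounts.

MILLION, THOUSAND = 1_000_000, 1_000

def payoff(my_action, predicted_action):
    """Standard Newcomb payoffs: the opaque box is filled iff Omega predicts
    one-boxing; the transparent box always holds $1,000."""
    opaque = MILLION if predicted_action == "one-box" else 0
    return opaque if my_action == "one-box" else opaque + THOUSAND

def ev_uncorrelated(my_action, q=0.5, N=10**12):
    """Omega's program a is sampled independently of me; being one of the |P|
    programs myself shifts the one-boxing fraction by at most 1/N."""
    q_pred = q + (1 / N if my_action == "one-box" else -1 / N)
    return q_pred * payoff(my_action, "one-box") + (1 - q_pred) * payoff(my_action, "two-box")

def ev_conditioned_on_a_eq_b(my_action):
    """Condition on a = b: Omega predicts exactly what I do."""
    return payoff(my_action, my_action)

for ev in (ev_uncorrelated, ev_conditioned_on_a_eq_b):
    print(ev.__name__, {act: round(ev(act), 2) for act in ("one-box", "two-box")})
# Uncorrelated prior: two-boxing wins by ~$1,000 (the 1/N term is negligible).
# Conditioned on a = b: one-boxing wins ($1,000,000 vs $1,000).
```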
Another way to frame this is whether you optimize through the inner program or assume it fixed. From your prior, Omega samples randomly, so you obviously don’t want to optimize through it. Not only because |P| is huge, but actually more importantly, because for any policy you might want to implement, there exists a policy that implements the exact opposite (and might be sampled just as likely as you), thus your effect nets out to exactly 0. But once you update on (condition on) seeing that actually Omega sampled exactly you (instead of your random twin, or any other old program), then you’d want to optimize through a! But you can’t have your cake and eat it too. You cannot shape yourself to reap that reward (in the world where Omega samples exactly you) without also shaping yourself to give up some reward (relative to other players) in the world where Omega samples your evil twin.

(Thus you might want to say: Aha! I will make my program behave differently depending on whether it is facing itself or an evil twin! But then… I can also create an evil twin relative to that more complex program. And we keep on like this forever. That is, you cannot actually, fully, ever condition your evil twin away, because you are embedded in Omega’s distribution. I think this is structurally equivalent to some commitment race dynamics I discussed with James, but I won’t get into that here.)
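The cancellation point can be made concrete with a toy sketch. The observation space and the choice of P below are purely hypothetical; the only thing being shown is that if P contains every policy, it also contains every policy's mirror image, so from the uncorrelated prior the statistics of what Omega samples don't depend on what you make your own program do.

```python
# Toy illustration of the cancellation argument. Hypothetical assumptions:
# a tiny observation space, and P = every deterministic policy over it,
# sampled uniformly by Omega.

from itertools import product

OBSERVATIONS = ("facing_self", "facing_evil_twin", "facing_other")   # hypothetical
ACTIONS = ("one-box", "two-box")

# Every deterministic mapping from observations to actions: 2**3 = 8 policies.
P = [dict(zip(OBSERVATIONS, choice)) for choice in product(ACTIONS, repeat=len(OBSERVATIONS))]

def mirror(policy):
    """The 'evil twin': outputs the opposite action on every observation."""
    return {obs: "two-box" if act == "one-box" else "one-box" for obs, act in policy.items()}

# P is closed under mirroring: every policy's evil twin is also a candidate sample.
assert all(mirror(p) in P for p in P)

# Hence a uniformly sampled a one-boxes exactly half the time on every
# observation, and nothing about which b *you* are changes that distribution.
for obs in OBSERVATIONS:
    print(obs, sum(p[obs] == "one-box" for p in P) / len(P))   # 0.5 each
```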
Secretly, I think this duality only feels counter-intuitive because it’s an instance of dynamic inconsistency: you want to take different actions once you update on information, i.e., the globally optimal action (from the uncorrelated prior) is not always the same as the locally optimal action (from a particular posterior, like the one assuming correlation). Relatedly, I think the only reason your Universal framing differs from your Functional and Anthropic framings is exactly that they are (implicitly) using these two different distributions (one without correlation, the other with):
The Universal framing assumes that Omega samples randomly (no correlation).
The Functional framing assumes that “you have control over the inner program”. But this is sneaking something in. You obviously have control over your own program. But you don’t actually have any control over “the program that Omega will randomly sample from P” (because anything you do is cancelled by your evil twin). Thus, assuming you have control over the inner program is equivalent to assuming that Omega sampled you. That is, yes correlation.
The Anthropic framing is also implicitly assuming “Omega samples you”, although it’s a bit more complicated since it also depends on your utility function (how much you care about different programs getting different amounts of money):
If your distribution is truly “Omega samples programs at random”, then when you observe two equal numbers, you are almost certain to be in a simulation. Given that, if you, for example, care equally about all programs getting as much money as possible, then of course you should one-box. That will entail that each random program (with a tiny probability) gets a million, which is a huge win. But the intuition that you were expressing in Question 2 (“p2 is better than p1 because it scores better”) isn’t compatible with “caring equally about all programs”. Instead, it sounds as if you positively want to score better than other programs, that is, maximize your score and minimize theirs! If that’s the case, then you obviously should two-box, since almost certainly you are subtracting a million from another program, not yourself. Even assuming, for simplicity, that you only care about beating one other program (as in the p1 and p2 example), you should two-box, because you are subtracting the same million dollars from both, but you are gaining a very slight edge with the thousand dollars (a rough numeric sketch of both cases follows below).
If your distribution is instead “a = b” (assumes correlation), then, regardless of whether you want to maximize everyone’s payoff or you want to beat another program that could be sampled, you want to one-box, since the million-dollar benefit is coming straight to you, and is bigger than the thousand-dollar benefit.
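Here is the promised rough numeric sketch of the two Anthropic cases. The setup is illustrative, not from the post: N stands in for |P|; after seeing two equal numbers I assume you are the simulated a with probability 1 - 1/N under the random-sampling prior; and only money whose amount depends on your own action is counted (independently fixed box contents, and the other program's own choice, are the same either way and so are dropped from the comparison).

```python
# Rough numeric sketch of the two Anthropic-framing cases above, under the
# illustrative assumptions stated in the lead-in.

MILLION, THOUSAND = 1_000_000, 1_000
N = 10**12                    # stands in for the huge |P|
p_sim = 1 - 1 / N             # random sampling: you are almost surely the simulation

def money_uncorrelated(action):
    """Expected (your money, other program's money) controlled by `action`
    under the random-sampling prior."""
    if action == "one-box":
        yours = (1 - p_sim) * 0          # as the real b you forgo the transparent $1,000
        theirs = p_sim * MILLION         # as the simulated a you fill the other program's opaque box
    else:                                # two-box
        yours = (1 - p_sim) * THOUSAND   # as the real b you at least grab the transparent box
        theirs = p_sim * 0               # as the simulated a you leave their opaque box empty
    return yours, theirs

def money_a_eq_b(action):
    """Under a = b the prediction just is your action, and the money is yours."""
    return (MILLION if action == "one-box" else THOUSAND), 0

for dist in (money_uncorrelated, money_a_eq_b):
    for action in ("one-box", "two-box"):
        yours, theirs = dist(action)
        print(f"{dist.__name__:20s} {action:8s} you: {yours:>12.4g}  them: {theirs:>12.4g}")
# Random sampling: one-boxing is great for the "everyone's total money" utility
# (the million goes to some other program), but two-boxing is better for your
# own score -- the thousand is the only part of the comparison that is yours.
# Conditioning on a = b: one-boxing wins under either utility function.
```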
(Or maybe the right way to think about this is: it will have a tiny but non-zero effect, because you are one of the |P| programs, but since |P| is huge, that is ~0.)
No effect. I meant that the programmer has to write b from P, not that b is added to P. Probably I should change the phrasing to make it clearer.
But the intuition that you were expressing in Question 2 (“p2 is better than p1 because it scores better”) isn’t compatible with “caring equally about all programs”. Instead, it sounds as if you positively want to score better than other programs, that is, maximize your score and minimize theirs!
No, the utility here is just the amount of money b gets, whatever program it is. a doesn’t get any money; it just determines what will be in the first box.
No, the utility here is just the amount of money b gets
I meant that it sounded like you “wanted a better average score (over a's) when you are randomly sampled as b than other programs”. Although again I think the intuition-pumping is misleading here because the programmer is choosing which b to fix, but not which a to fix. So whether you wanna one-box only depends on whether you condition on a = b.