You didn’t mention in the Newcomb’s Problem article that you’re a one-boxer.
As a die-hard two-boxer, perhaps someone can explain one-boxing to me. Let’s say that Box A contains money to save 3 lives (if Omega predicts you’ll take Box A only) or nothing, and Box B contains money to save 2 lives. Conditional on this being the only game Omega will ever play with you, why the hell would you take Box A only?
I suspect that what all you one-boxers are doing is this: you somehow believe that a scenario like this one will actually occur, and you’re trying to broadcast your intent to one-box so that Omega will put the money in for you.
Imagine Omega’s predictions have a 99.9% success rate, and then work out the expected gain for one-boxers vs two-boxers.
By stepping back and setting aside the ‘can’t change the contents now’ objection, you can see that one-boxers do much better than two-boxers. Since we want to maximise our expected payoff, we should become one-boxers.
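Working the numbers through (a quick sketch in Python, using the lives-saved payoffs from this thread and the stipulated 99.9% accuracy):

```python
# Expected lives saved, given Omega predicts your choice with 99.9% accuracy.
p = 0.999  # Omega's assumed prediction accuracy

# One-boxer: Omega almost always predicts correctly and fills Box A (3 lives);
# on the rare miss, Box A is empty and you walk away with nothing.
ev_one_box = p * 3 + (1 - p) * 0

# Two-boxer: Omega almost always predicts two-boxing and leaves Box A empty,
# so you get only Box B (2 lives); on the rare miss you get both (3 + 2 = 5).
ev_two_box = p * 2 + (1 - p) * 5

print(round(ev_one_box, 3))  # 2.997
print(round(ev_two_box, 3))  # 2.003
```

So under these payoffs the one-boxer expects to save nearly a full extra life per game, and that gap only closes if Omega’s accuracy drops quite a bit.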
Not sure if I find this convincing.
I posted that comment four or five months ago. I’m a one-boxer now, haha. The way to see it is that you can either choose to always one-box, or choose to pretend you’re going to one-box while actually intending to two-box. Omega is assumed to be able to tell the difference, so the first option makes more sense.
Through the composition of my mind, I can choose to save 3 lives by genuinely being willing to refuse the money that saves 2 lives. Or I can choose to save the 2 lives and thereby give up the 3. Why the hell would I take both boxes?
I guess that makes sense, if you have the option of choosing what the composition of your mind is.
“Composition of my mind” is a bad phrase for it, but what I mean is that I have a collection of neurons that say “I’m a one-boxer” or similar.
You can find several long discussions of this on Overcoming Bias, and in earlier posts on Less Wrong.