Newcomb’s altruism

Hello, LessWrong community. I want to play a game. *Jigsaw music plays*

In box B, I have placed either $1,000,000 multiplied by the number of humans on Earth, or $0. In box A, I have placed $1,000 multiplied by the number of humans on Earth. Every human on the planet, including you, will now be asked the following question: do you take just box B, or both boxes? If my friend Omega predicted that the majority of humans would one-box, box B contains the aforementioned sum, and it will be split accordingly (everyone in the world receives $1,000,000). If he predicted that the majority would two-box, it contains nothing. Everyone who two-boxes receives $1,000 on top of whatever comes out of box B. In fact, forget the predicting: let’s just say I’ll tally up the votes and *then* decide whether or not to put the money in box B. Would it then be rational to two-box or one-box? If I told you that X is the proportion of humans who one-box in the classical Newcomb’s problem, should that affect your strategy? What if I told you that Y is the proportion of one-boxers among those who have chosen so far? Would it even be morally permissible to two-box? Also, let’s assume the number of humans is odd (since I know someone’s going to ask what happens in a tie).
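To make the payoff explicit (the formula and the indicator notation are my gloss, assuming box B is filled exactly when the majority one-boxes, i.e. that Omega predicts correctly or that I tally honestly), each person receives:

$$\text{payoff per person} = \$1{,}000{,}000 \cdot \mathbf{1}[\text{the majority one-boxes}] \;+\; \$1{,}000 \cdot \mathbf{1}[\text{you two-box}]$$

The first term depends only on the collective outcome; the second depends only on your own choice.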

I also have a follow-up in both cases. If you chose to two-box, let’s say I cut your decision short to tell you that there are N other instances of yourself in the world, for I have cloned you secretly and without consent (sue me). How big would N have to be for you to one-box? If you chose to one-box, and I cut your decision short to say that N people in the world have already two-boxed, or have already one-boxed, how big would N have to be for you to decide your effect is inconsequential and two-box?
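For the cloning version, here is a rough way to see what N buys you (my framing, using the vote-tallying variant): write $M$ for the total number of voters and $k$ for the number of one-boxers among the $M - N - 1$ people who are neither you nor a clone. If the clones vote the way you do, your choice controls $N + 1$ votes and flips the outcome exactly when

$$\frac{M}{2} - (N + 1) < k \le \frac{M}{2},$$

so the question comes down to how much probability you put on the tally landing in that window.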