Imagine Omega’s predictions have a 99.9% success rate, and then work out the expected gain for one-boxers vs two-boxers.
By stepping back and setting aside the 'you can't change the contents now' objection, you can see that one-boxers do much better than two-boxers; since we want to maximise our expected payoff, we should become one-boxers.
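A quick sketch of that expected-value calculation, assuming the conventional Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box if Omega predicted one-boxing) — those dollar amounts are the standard ones, not given above:

```python
# Expected payoffs in Newcomb's problem with Omega's prediction
# accuracy p = 0.999 (as stipulated above). Box contents are the
# conventional amounts, assumed here for illustration.

p = 0.999                       # Omega's prediction accuracy
small, big = 1_000, 1_000_000   # transparent / opaque box (assumed)

# One-boxer: $1,000,000 if Omega predicted correctly, $0 otherwise.
ev_one_box = p * big + (1 - p) * 0

# Two-boxer: $1,000 if Omega predicted correctly (opaque box empty),
# $1,001,000 if Omega wrongly predicted one-boxing.
ev_two_box = p * small + (1 - p) * (small + big)

print(f"one-boxer EV: ${ev_one_box:,.0f}")   # $999,000
print(f"two-boxer EV: ${ev_two_box:,.0f}")   # $2,000
```

On these numbers the one-boxer's expected payoff is roughly 500 times the two-boxer's, which is the force of the argument.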
I'm not sure I find this convincing.
To the extent that we all have common values, rationality should correlate with achieving those values: so if niceness is a general value, a rationalist community should be nice (or gain enough of some other value to make up for the loss).
And even if niceness is not a reasonably universal value, our understanding of niceness does seem, empirically, to correlate with rationality.