I expect that other voters' choices correlate with mine, so I am not deciding just one vote but effectively a significant fraction of the votes.
If the number of uncorrelated blue voters, plus the number of people who vote identically to me, exceeds 50%, then I can save the uncorrelated blue voters.
More formally: let R, B, and C denote the fractions of uncorrelated red voters, uncorrelated blue voters, and correlated voters (those who will vote the same as you do). Let S be the fraction of people you'd let die in order to save yourself (i.e. some measure of selfishness).
Then choosing blue over red gives you extra utility/lives saved depending on what R,B,C,S are.
If B>0.5 then the utility difference is 0 (blue wins regardless of your choice).
If B<0.5 and B+C>0.5 then the difference is +B (your correlated bloc tips blue over 50%, saving the B uncorrelated blue voters who would otherwise die).
If B+C<0.5 then the difference is -(C+S) (blue still loses, so your correlated bloc dies along with you, your own life weighted by S).
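As a minimal sketch of that case analysis (in Python; the function name and the handling of exact 50% boundaries are my own choices, not specified above), it could look like this:

```python
def blue_minus_red_utility(B, C, S):
    """Utility difference (in lives saved, as fractions of the pool) from
    voting blue rather than red, following the three cases above.

    B: fraction of uncorrelated blue voters
    C: fraction of voters correlated with you (they vote however you vote)
    S: fraction of people you'd let die to save yourself (selfishness)
    """
    if B > 0.5:
        # Blue wins without your bloc; your choice changes nothing.
        return 0.0
    elif B + C > 0.5:
        # Your correlated bloc tips blue over 50%, saving the B
        # uncorrelated blue voters who would otherwise have died.
        return B
    else:
        # Blue still loses: your correlated bloc dies along with you,
        # your own life weighted by S.
        return -(C + S)
```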
By taking the expectation over your uncertainty about what B, R, and C might be, for example by averaging across some randomly chosen scenarios that properly cover that uncertainty, you get the difference in expected utility between voting blue and red.
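For instance, a Monte Carlo version of that expectation might look like the following, where the uniformly random partition of the voter pool into R, B, C is purely an illustrative stand-in for your actual beliefs:

```python
import random

def blue_minus_red_utility(B, C, S):
    # Same piecewise rule as in the previous sketch.
    if B > 0.5:
        return 0.0
    if B + C > 0.5:
        return B
    return -(C + S)

def expected_difference(n_scenarios=100_000, S=0.0):
    """Average the blue-minus-red utility difference over sampled scenarios.

    The sampling scheme (a uniformly random split of the voter pool into
    R, B, C) is illustrative only; substitute whatever distribution
    actually captures your uncertainty.
    """
    total = 0.0
    for _ in range(n_scenarios):
        cuts = sorted(random.random() for _ in range(2))
        R, B, C = cuts[0], cuts[1] - cuts[0], 1.0 - cuts[1]
        total += blue_minus_red_utility(B, C, S)
    return total / n_scenarios

# A positive expected_difference() would favour voting blue under these
# (illustrative) beliefs about R, B, C.
```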
You can estimate C, R, and B by guessing which algorithms other voters use to decide their votes, and how similar those algorithms are to your own. Getting good precision on the latter part probably involves also guessing the epistemic state of other voters, i.e. their guesses for C, R, and B, and doing some more complicated game theory and solving for equilibria.
If people vote as if their individual vote determines the vote of a non-negligible fraction of the voter pool, then you only need ν=O(1/N) (averaged over the whole population, so the total value across the population is νN=O(1)) rather than ν=O(1), which seems much more realistic.
So voting blue can make sense for a sufficiently large coalition of “ordinary altruists” with ν≫1/N who are able to pre-commit to their vote and think people outside the coalition might vote blue by mistake etc., rather than the “extraordinary altruists” we need in the original situation with ν=O(1). Ditto if you’re using a decision theory where it makes sense to suppose such a commitment already exists when making your decision.