Red Pill vs Blue Pill, Bayes style

It’s been going around twitter. I answered without thinking. Then I thought about it for a while and decided both answers should be fine, then I thought some more and decided everything I first thought was probably wrong. So now it’s time to try math. Ege Erdil did a solid calculation on LW already, and another one is up on twitter. But neither of them accounts for the logical correlation between your decision and the unknown decisions of the other players. So let’s try Bayes’ theorem!

(If you just want the answer, skip to the end.)

Epistemic status: there are probably some mistakes in here, and even if there aren’t, I still spent like 8 hours of my life on a twitter poll.

We’ll begin with the state of least possible knowledge: a Jeffreys non-informative beta prior for the probability $p$ that a randomly selected player will choose blue. That is,

$$p \sim \operatorname{Beta}\!\left(\tfrac{1}{2}, \tfrac{1}{2}\right)$$

Of course if you think you understand people you could come up with a better prior. (Also if the blue pill looks like a scary people blender, that should probably change your prior.) I saw the poll results but I’m pretending I didn’t, so I’ll stick with Jeffreys.

Now, you yourself are one of the players in this group, so you need to update based on your choice. That’s right, we’re using acausal logic here. Your decision is correlated with other people’s decisions. How do you update when you haven’t decided yet? Luckily there are only two choices, so we’ll brute force it: try both and see which turns out better in expectation.

To do a Bayesian update of a beta prior based on 1 data point, we just add 1 to one of the parameters. (If you think you’re a special snowflake/alien and other people are not like you… then maybe update your side by less than 1.)
So if you choose blue, your posterior over $p$ is now:

$$p \sim \operatorname{Beta}\!\left(\tfrac{3}{2}, \tfrac{1}{2}\right)$$

Whereas if you choose red, your posterior is:

$$p \sim \operatorname{Beta}\!\left(\tfrac{1}{2}, \tfrac{3}{2}\right)$$

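For concreteness, here’s a quick sketch of that prior and update in scipy (nothing beyond the beta distributions written out explicitly):

```python
from scipy.stats import beta

# Jeffreys prior over p, the probability that a random player chooses blue
prior = beta(0.5, 0.5)

# One observation (your own choice) just adds 1 to the matching parameter
posterior_if_blue = beta(0.5 + 1, 0.5)   # Beta(3/2, 1/2)
posterior_if_red  = beta(0.5, 0.5 + 1)   # Beta(1/2, 3/2)

print(prior.mean(), posterior_if_blue.mean(), posterior_if_red.mean())
# 0.5 0.75 0.25
```
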
Take $N$ to be the number of participants in this pill game, besides you. I will assume $N$ is large, so the number who choose blue is approximately $pN$. I will also assume $N$ is even, so the total number of players, $N+1$, is odd, to avoid having to guess what happens in a tie.

Now, consider your utility function. We can start by assuming it’s linear[1], and that you value everyone else’s life equally, but at a multiple $\nu$ of your own life (following Ege Erdil’s nomenclature for consistency). Then, scaling so that the value of your own life is 1, we can define your utility prior to playing:

$$U = \mathbf{1}[\text{you survive}] + \nu \cdot (\text{number of other players who survive})$$

You might also assign different values to red-choosers and blue-choosers (one commenter I saw said they wouldn’t want to live in a world populated only by people who picked red) but I’m going to ignore that complication for now.

Now, if $\nu = 0$ (total selfishness) then you should choose red. But I think most people would give their lives heroically to save a crowd of a hundred strangers. Or at least I’d like to think that. So for the sake of illustration, let’s imagine $\nu = 0.01$.

Your final utility will depend on $p$ (how likely others are to choose blue), and on whether you yourself choose blue or red.

In the tiebreaker case, you will live regardless of your choice, but choosing red will kill half of the other players. The probability that you have the tie-breaking vote is the binomial probability that exactly $N/2$ of the others choose blue,

$$P(\text{tie} \mid p) = \binom{N}{N/2}\, p^{N/2} (1-p)^{N/2}$$

But this still has $p$ in it, so we’ll have to take the expectation over $p$. That led me into a mess of approximating large factorials, but code interpreter eventually coached me through it. Here’s the expected tie probability for $N$ from 10 to 10 billion. It can be approximated very well by $0.6275/N$.
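If you want to skip the factorial gymnastics, the expectation has a closed form (the integral over $p$ is itself a beta function), so it can be evaluated exactly in log space. A rough sketch of that calculation, assuming the Beta(3/2, 1/2) posterior (the function name is mine):

```python
import numpy as np
from scipy.special import gammaln, betaln

def expected_tie_prob(N, a=1.5, b=0.5):
    """E_p[ C(N, N/2) * p^(N/2) * (1-p)^(N/2) ] with p ~ Beta(a, b).

    The integral over p is a beta function, so the whole thing can be
    evaluated exactly with log-gamma functions -- no factorial approximations.
    """
    k = N // 2
    log_binom = gammaln(N + 1) - 2 * gammaln(k + 1)        # log C(N, N/2)
    log_integral = betaln(k + a, k + b) - betaln(a, b)     # log E[p^k (1-p)^k]
    return np.exp(log_binom + log_integral)

for N in [10, 10**4, 10**7, 10**10]:
    print(N, N * expected_tie_prob(N))   # N * E[P(tie)] settles near a constant
```
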

For the non-tiebreaker cases, you have to evaluate $P(\text{blue majority})$ and $P(\text{red majority})$, again as an expectation over $p$. This is where I’ll use the approximation that the number who choose blue is $\approx pN$. Then the expected probability that the majority chooses red is just the CDF of the beta distribution up to 0.5. This approximation is valid for values of $N$ above a couple hundred:

$$E_p\!\left[P(\text{red majority})\right] \approx P\!\left(p < \tfrac{1}{2}\right) = \int_0^{1/2} \operatorname{Beta}(p;\, \alpha, \beta)\, dp$$

That means that for our posterior from above, if you choose blue, the probability of the majority choosing red is about 0.1817. If you choose red, it’s the opposite: the probability that the majority will choose red is 1-0.1817 = 0.8183, over 4 times higher.
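Those numbers are just the beta CDF evaluated at 0.5 under each posterior, e.g. in scipy:

```python
from scipy.stats import beta

# P(red majority) ~= P(p < 1/2) under each posterior
print(beta.cdf(0.5, 1.5, 0.5))   # you chose blue: ~0.1817
print(beta.cdf(0.5, 0.5, 1.5))   # you chose red:  ~0.8183
```
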

The last thing we need to calculate is the number of players we should expect to die, in the cases where the majority chooses red. This consideration will push in the opposite direction. If you chose blue, it’s more likely that the majority also chose blue. But if blue does fall short of a majority, it will probably fall only a little short. That means more deaths.

Specifically, we want the expectation of $p$ over the interval (0, 0.5), i.e. $E[p \mid p < 0.5]$. That’s shown by the dotted lines in the plots below. You can roughly see that each of them divides the green area in half. For the case where you chose red, the expected fraction of players who die (conditional on a red majority) is 0.1527. If you chose blue, it’s 0.3120, or about twice as many.

Dotted line shows the expected fraction of players who die (i.e., who chose blue) if the majority chooses red. (a) what you should expect if you chose red; (b) what you should expect if you chose blue
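Here’s a small sketch of that conditional expectation, doing the integral numerically under each posterior (the helper name is mine):

```python
from scipy.stats import beta
from scipy.integrate import quad

def expected_death_fraction(a, b):
    """E[p | p < 1/2]: expected fraction choosing blue, given a red majority."""
    mass = beta.cdf(0.5, a, b)                                # P(red majority)
    num, _ = quad(lambda p: p * beta.pdf(p, a, b), 0, 0.5)    # integral of p over (0, 0.5)
    return num / mass

print(expected_death_fraction(0.5, 1.5))   # you chose red:  ~0.1527
print(expected_death_fraction(1.5, 0.5))   # you chose blue: ~0.3120
```
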

Now we can go back and calculate the expected utility function for each case:

$$E[U_{\text{blue}}] \approx 0.8183\,(1 + \nu N) \,+\, 0.1817\,\nu N\,(1 - 0.3120) \,+\, E[P(\text{tie})]\,(1 + \nu N)$$

$$E[U_{\text{red}}] \approx 0.1817\,(1 + \nu N) \,+\, 0.8183\,\big(1 + \nu N\,(1 - 0.1527)\big) \,+\, E[P(\text{tie})]\,\left(1 + \tfrac{\nu N}{2}\right)$$

Substituting the approximation for $E[P(\text{tie})]$ and simplifying,

$$E[U_{\text{blue}}] \approx 0.9433\,\nu N + 0.6275\,\nu + 0.8183 + \tfrac{0.6275}{N}$$

$$E[U_{\text{red}}] \approx 0.8750\,\nu N + 0.3137\,\nu + 1 + \tfrac{0.6275}{N}$$

Expected utility if you chose red or blue, as a function of the number of players. The number of players needed to make choosing blue sensible depends on your altruism $\nu$, the value you put on a stranger relative to yourself.

The constant term here is the probability (and hence expected utility) of your own survival.

The second term, proportional to $\nu$, the value that you put on another person’s life relative to your own, comes from the possibility that you might have the tiebreaking vote: here $N$ cancels out because as the tie probability decreases, the number of people you could save increases, so choosing blue causally saves about 1/3 of a person on average (0.6275 − 0.3137). (The final term also relates to the tiebreaker, but it will drop out as $N$ becomes large, and it was the same in both cases anyway.)

The first term, proportional to $\nu N$, represents the potential to benefit others via the acausal relationship, and obviously it’s greater if you choose blue. As long as $\nu > 0$, this term will eventually come to dominate over the other factors as the number of players increases, which means anyone who cares (linearly) about other people should choose blue for large values of $N$.
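To make that concrete, here’s a sketch that plugs the numbers above back into the simplified utilities and solves for the crossover value of $N$ (the constant names are mine, and the tie term uses the fitted $0.6275/N$ from earlier):

```python
from scipy.stats import beta
from scipy.integrate import quad

C_TIE = 0.6275   # fitted constant from above: E[P(tie)] ~= C_TIE / N

def red_majority_stats(a, b):
    """P(red majority) and E[p | red majority] for a Beta(a, b) posterior."""
    p_red = beta.cdf(0.5, a, b)
    e_p = quad(lambda p: p * beta.pdf(p, a, b), 0, 0.5)[0] / p_red
    return p_red, e_p

p_red_b, e_p_b = red_majority_stats(1.5, 0.5)   # posterior if you chose blue
p_red_r, e_p_r = red_majority_stats(0.5, 1.5)   # posterior if you chose red

# Coefficients of the nu*N term in each simplified expected utility
slope_blue = (1 - p_red_b) + p_red_b * (1 - e_p_b)   # ~0.9433
slope_red  = (1 - p_red_r) + p_red_r * (1 - e_p_r)   # ~0.8750

# E[U_blue] > E[U_red]  <=>  N > (P(red majority | blue) - C_TIE*nu/2) / ((slope_blue - slope_red) * nu)
for nu in (0.1, 0.01, 0.001):
    n_star = (p_red_b - C_TIE * nu / 2) / ((slope_blue - slope_red) * nu)
    print(f"nu = {nu}: choose blue if N > {n_star:.1f}")
```

With these constants the thresholds come out within a fraction of a percent of the ones listed in the conclusion below.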

In conclusion:

You’ll want to choose blue if the number of players is at least 2-3 times the minimum number you would give your life to save.

  • For ν=0.1: Choose blue if N > 22.01

  • For ν=0.01: Choose blue if N > 261.44

  • For ν=0.001: Choose blue if N > 2655.73

I found this result very counterintuitive. And it also makes me a jerk, because I originally chose red. But let me try to stan blue a little. I think the argument goes something like: Red can help you, personally, survive. But in a society that tends towards blue, more people survive on average. If enough people’s lives are at risk, then making society a little more blue will outweigh the risk to your own life.

To be honest, I don’t know if I’d have the guts to choose an 18% chance of death on the basis of spooky correlations.

  1. ^

    What if your utility is logarithmic in the number of survivors? I think it won’t make much difference, because then you also have to take into account everyone in the world who’s not playing as well. If $N$ is less than about a billion, your marginal utility over those people will still be roughly linear. If you would choose blue under that linear marginal utility for $N = 1$ billion, then increasing $N$ should not change your mind. If you would choose red for $N = 1$ billion, it’s because your value for altruism $\nu$ is very close to zero, and you were probably going to choose red no matter what.