Subjective Altruism

Let us assume for the purpose of this argument that Bayesian probabilities are subjective. Specifically, I am thinking in terms of the model of probability expressed in model 4 here. That is to say that the meaning of P(A)=2/3 is “I as a decision agent care twice as much about the possible world in which A is true relative to the possible world in which A is false.” Let us also assume that it is possible to have preferences about things that we will never observe.
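To make the “caring” reading concrete (this is my own gloss, not something spelled out in the linked model): if w_true and w_false are the weights a decision agent puts on the A-world and the not-A-world when evaluating actions, then P(A) = w_true/(w_true + w_false), so P(A)=2/3 just says w_true = 2·w_false. Ranking actions by the weighted sum w_true·U(A-world) + w_false·U(not-A-world) is then ordinary expected utility maximization, up to a positive rescaling of the weights.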

Consider the following thought experiment:

Alice and Bob are agents who disagree about the fairness of a coin. Alice believes that the coin will come up heads with probability 2/3 and Bob believes the coin will come up tails with probability 2/3. They discuss their reasons for a long time and realize that their disagreement comes from different initial prior assumptions, and they agree that both people have rational probabilities given their respective priors. Alice is given the opportunity to gamble on behalf of Bob. Alice must call heads or tails, then the coin will be flipped once. If Alice calls the coin correctly, then Bob will be given a dollar. If she calls the coin incorrectly, then nothing happens. Either way, nobody sees the result of the coin flip, and Alice and Bob never interact again. Should Alice call heads or tails?
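To spell out the arithmetic behind the disagreement: under Alice's probabilities, calling heads gives Bob an expected payoff of (2/3)($1) ≈ $0.67 and calling tails gives (1/3)($1) ≈ $0.33; under Bob's probabilities the two numbers are swapped. So whichever set of probabilities Alice decides to use fully determines her call.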

The meat of this question is this: when trying to be altruistic towards a person, should you maximize their expected utility under their priors or under your own? I will present an argument below, but feel free to stop reading here, think about it on your own, and post the results.

First of all, notice that there are actually 3 options:

1) Maximize your own expectation of Bob’s utility

2) Maximize Bob’s expectation of his utility

3) Maximize what Bob’s expectation of his utility would be if he were to update on all of the evidence that you have observed.

At first it may have looked like the main options were 1 and 2, but I claim that 2 is actually a very bad option and that the only real question is between options 1 and 3. Option 2 is stupid because, for example, it would cause Alice to call tails even if she has already seen the coin flip and it came up heads. There is no reason for Alice not to update on all of the information she has. The only question is whose prior she should update from. In this specific thought experiment, we are assuming that 2 and 3 are the same, since Alice has already convinced herself that her observations could not change Bob’s mind, but I think that in general options 1 and 3 are somewhat reasonable answers, while 2 is not.
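Here is a small sketch of the comparison in code (entirely my own illustration; the priors, the perfectly reliable peek, and the dollar payoff are all stipulated for the example, not part of the original setup):

```python
# What each option tells Alice to call, with or without peeking at the flip.
# Probabilities are P(heads); Bob gets $1 exactly when Alice's call matches the flip.

def posterior_heads(prior_heads, saw_heads=None):
    """Update P(heads) on a perfectly reliable observation of the flip, if there is one."""
    if saw_heads is None:
        return prior_heads
    return 1.0 if saw_heads else 0.0

def best_call(p_heads):
    """Calling heads pays off with probability p_heads, tails with 1 - p_heads."""
    return "heads" if p_heads >= 0.5 else "tails"

alice_prior, bob_prior = 2 / 3, 1 / 3   # P(heads) according to Alice and according to Bob
saw_heads = True                        # suppose Alice peeked and saw heads

option_1 = best_call(posterior_heads(alice_prior, saw_heads))  # my beliefs, updated on my evidence
option_2 = best_call(bob_prior)                                # Bob's beliefs, with no update at all
option_3 = best_call(posterior_heads(bob_prior, saw_heads))    # Bob's prior, updated on my evidence

print(option_1, option_2, option_3)  # -> heads tails heads
```

With the peek, options 1 and 3 both say heads while option 2 stubbornly says tails; without the peek (saw_heads = None), option 1 says heads and options 2 and 3 both say tails, which is exactly the disagreement the rest of this post is about.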

Option 3 has the nice property that it does not require access to Bob’s utility function; it only requires knowing Bob’s expected utility for each of the different choices. This is nice because in many ways “expected utility” seems like a more fundamental and possibly better-defined concept than “utility.” We are trying to be altruistic towards Bob. It seems natural to give Bob the most utility in the possible worlds that he “cares about” the most.

On the other hand, we want the possible worlds that we care about most to be as good as possible. We may not ever be able to observe whether or not Bob gets the dollar, but it is not just Bob who wants Bob to get the dollar. We also want Bob to get the dollar, and we want him to get it in the most important possible worlds, the worlds we assign high probability to. What we want is for Bob to be happy in the worlds that are important. We may have subjectively assigned those possible worlds to be the most important ones, but from our standpoint as a decision agent, the worlds we assign high probability to really are more important than the other ones.

Option 1 is also simpler than option 3. We just have a variable for Bob’s utility in our own utility function, and we do whatever maximizes our expected utility. If we took option 3, we would be maximizing something that is not just a product of our utilities with our probabilities.
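Written out (in my notation, not the post’s): option 1 maximizes Σ_w P_me(w)·U_me(w), where U_me includes a term for Bob’s welfare, while option 3 maximizes Σ_w P_Bob(w | my observations)·U_Bob(w). The second sum mixes my evidence with Bob’s prior and Bob’s utilities, so it is not of the form “my probabilities times my utilities,” which is the sense in which option 1 is structurally simpler.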

Option 3 has some unfortunate consequences. For example, it might cause us to pray for a religious person even if we are very strongly atheist.

I prefer option 1. I care about the worlds that are simple and therefore are given high probability. I want everyone to be happy in those worlds. I would not sacrifice the happiness of someone in a simple/probable/important world just because someone else thinks another world is important. Probability may be subjective, but relative to the probabilities that I use to make all my decisions, Bob’s probabilities are just wrong.

Option 3 is nice in situations where Alice and Bob will continue interacting, possibly even interacting through mutual simulation. If Alice and Bob were given a symmetric scenario, then this would become a prisoner’s dilemma, where Alice choosing heads corresponds to defecting and Alice choosing tails corresponds to cooperating. However, I believe this is a separate issue.