Co-operative Utilitarianism

Donald Regan’s masterful Utilitarianism and Co-operation (1980) raises a problem for traditional moral theories, which conceive of agents as choosing between external options like ‘push’ or ‘not-push’ (options that are specifiable independently of the motive from which they are performed). He proves that no such traditional theory T is adaptable, in the sense that “the agents who satisfy T, whoever and however numerous they may be, are guaranteed to produce the best consequences possible [from among their options] as a group, given the behaviour of everyone else.” (p. 6) It’s easy to see that various forms of rule or collective consequentialism fail when you’re the only agent satisfying the theory: doing what would be best if everyone played their part is not necessarily to do what’s actually best. What’s more interesting is that even Act Utilitarianism can fail to solve co-ordination problems like the following:

|                 | Poof: push | Poof: not-push |
|-----------------|------------|----------------|
| Whiff: push     | 10         | 0              |
| Whiff: not-push | 0          | 6              |

Here the best result is obviously for Whiff and Poof to both push. But this isn’t guaranteed by the mere fact that each agent does as AU says they ought. Why not? Well, what each ought to do depends on what the other does. If Poof doesn’t push then neither should Whiff (that way he can at least secure 6 utils, which is better than 0). And vice versa. So, if Whiff and Poof both happen to not-push, then both have satisfied AU. Each, considered individually, has picked the best option available. But clearly this is insufficient: the two of them together have fallen into a bad equilibrium, and hence not done as well as they (collectively) could have.
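To make the failure concrete, here is a minimal sketch in Python (my own illustration, not anything from Regan’s text) that enumerates the action profiles of the Whiff-and-Poof game, checks which profiles leave each agent satisfying AU given the other’s act, and reports whether the resulting outcome is collectively optimal:

```python
# Sketch of the Whiff-and-Poof game: which joint actions leave *each*
# agent satisfying Act Utilitarianism (picking the best act given the
# other's act), and which of those are collectively optimal?

from itertools import product

ACTIONS = ("push", "not-push")

# Total utility produced by each joint action (Whiff's act, Poof's act).
UTILITY = {
    ("push", "push"): 10,
    ("push", "not-push"): 0,
    ("not-push", "push"): 0,
    ("not-push", "not-push"): 6,
}

def satisfies_au(profile):
    """Each agent's act must be at least as good as any alternative,
    holding the other agent's act fixed."""
    whiff, poof = profile
    whiff_ok = all(UTILITY[(whiff, poof)] >= UTILITY[(alt, poof)] for alt in ACTIONS)
    poof_ok = all(UTILITY[(whiff, poof)] >= UTILITY[(whiff, alt)] for alt in ACTIONS)
    return whiff_ok and poof_ok

best = max(UTILITY.values())
for profile in product(ACTIONS, repeat=2):
    if satisfies_au(profile):
        verdict = "collectively optimal" if UTILITY[profile] == best else "suboptimal equilibrium"
        print(profile, UTILITY[profile], verdict)
```

Both (push, push) and (not-push, not-push) pass the AU check, but only the former is collectively best; that gap is exactly what adaptability is meant to close.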

Regan’s solution is to build a certain decision-procedure into the objective requirements of the theory:

The basic idea is that each agent should proceed in two steps: First he should identify the other agents who are willing and able to co-operate in the production of the best possible consequences. Then he should do his part in the best plan of behaviour for the group consisting of himself and the others so identified, in view of the behaviour of non-members of the group. (p.x)

This theory, which Regan calls ‘Co-operative Utilitarianism’, secures the property of adaptability. (You can read Regan for the technical details; here I’m simply aiming to convey the rough idea.) To illustrate with our previous example: suppose Poof is a non-cooperator, and so decides on outside grounds to not-push. Then Whiff should (i) determine that Poof is not available to cooperate, and hence (ii) make the best of a bad situation by likewise not-pushing. In this case, only Whiff satisfies CU, and hence the agents who satisfy the theory (namely, Whiff alone) collectively achieve the best results available to them in the circumstances.

If both agents satisfied the theory, then they would first recognize the other as a cooperator, and then each would push, as that is what is required for them to “do their part” to achieve the best outcome available to the actual cooperators.
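Here is a correspondingly rough sketch of that two-step procedure (again my own gloss; the helper cu_action and the explicit payoff table are illustrative assumptions, not Regan’s formal apparatus): identify the cooperators, hold the non-cooperators’ behaviour fixed, find the best joint plan for the cooperating group, and do your part in it.

```python
# Rough sketch of the Co-operative Utilitarian two-step procedure for
# the Whiff-and-Poof game: given who the cooperators are and how the
# non-cooperators will in fact behave, do your part in the best joint
# plan available to the cooperating group.

from itertools import product

ACTIONS = ("push", "not-push")
AGENTS = ("Whiff", "Poof")

UTILITY = {
    ("push", "push"): 10,
    ("push", "not-push"): 0,
    ("not-push", "push"): 0,
    ("not-push", "not-push"): 6,
}

def cu_action(me, cooperators, noncooperator_acts):
    """Return `me`'s CU-prescribed act.

    cooperators: agents identified as willing and able to cooperate
      (assumed to include `me`).
    noncooperator_acts: dict fixing the behaviour of everyone else.
    """
    others = [a for a in AGENTS if a not in cooperators]

    def total(assignment):
        return UTILITY[(assignment["Whiff"], assignment["Poof"])]

    best_plan, best_value = None, float("-inf")
    for acts in product(ACTIONS, repeat=len(cooperators)):
        assignment = dict(zip(cooperators, acts))
        assignment.update({a: noncooperator_acts[a] for a in others})
        if total(assignment) > best_value:
            best_plan, best_value = assignment, total(assignment)
    return best_plan[me]

# If both agents cooperate, each is told to push (joint value 10):
print(cu_action("Whiff", cooperators=("Whiff", "Poof"), noncooperator_acts={}))

# If Poof is a non-cooperator who will not-push, Whiff alone is the group,
# and CU tells him to make the best of it by not-pushing (value 6):
print(cu_action("Whiff", cooperators=("Whiff",),
                noncooperator_acts={"Poof": "not-push"}))
```

With both agents in the cooperating group the procedure tells Whiff to push; with Poof fixed as a not-pushing non-cooperator it tells Whiff to not-push, matching the two cases described above.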

* * *

[Originally posted to Philosophy, etc. Reproduced here as an experiment of sorts: despite discussing philosophical topics, LW doesn’t tend to engage much with the extant philosophical literature, which seems like a lost opportunity. I chose this piece because of the possible connections between Regan’s view of cooperative games and the dominant LW view of competitive games: that one should be disposed to co-operate if and only if dealing with another co-operator. In any case, I’ll be interested to see whether others find this at all helpful or interesting—naturally that’ll influence whether I attempt this sort of thing again.]