Another question about utilitarianism and selfishness

Thought of this after reading the discussion following abcd_z’s post on utilitarianism, but it seemed sufficiently different that I figured I’d post it as a separate topic. It feels like the sort of thing that must have been discussed on this site before, but I haven’t seen anything like it (I don’t really follow the ethical philosophy discussions here), so pointers to relevant discussion would be appreciated.

Let’s say I start off with some arbitrary utility function, and that I have the ability to modify it at will. I then become convinced of the truth of preference utilitarianism. Now, presumably my new moral theory prescribes certain terminal values that differ from the ones I currently hold. To be specific, my moral theory tells me to construct a new utility function using some sort of aggregating procedure that takes as input the current utility functions of all moral agents (including my own). This is just a way of capturing the notion that if preference utilitarianism is true, then my behavior shouldn’t be directed towards the fulfilment of my own (prior) goals, but towards the maximization of preference satisfaction. Effectively, I should self-modify to have new goals.

But once I’ve done this, my own utility function has changed, so as a good preference utilitarian I should run the entire process over again, this time using my new utility function as one of the inputs. And then again, and again…

Let’s look at a toy model. In this universe there are two people: me (a preference utilitarian) and Alice (not a preference utilitarian). Suppose Alice does not alter her utility function in response to changes in mine. There are two mutually exclusive states of affairs that can be brought about in this universe: A and B. Alice assigns a utility of 10 to A and 5 to B, while I initially assign a utility of 3 to A and 6 to B. Assuming the correct way to aggregate utility is by averaging, I should modify my utilities to 6.5 for A and 5.5 for B. Once I have done this, I should again modify them to 8.25 for A and 5.25 for B. Evidently, my utility function will converge towards Alice’s.
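For concreteness, here is the same iteration in a few lines of Python (the variable names and the ten-round cutoff are just for illustration):

```python
# Toy model: Alice's utilities are fixed; mine get replaced each round
# by the average of her utilities and my current ones.
alice = {"A": 10.0, "B": 5.0}   # Alice never self-modifies
mine  = {"A": 3.0,  "B": 6.0}   # my initial utility function

for step in range(1, 11):
    mine = {outcome: (alice[outcome] + mine[outcome]) / 2 for outcome in mine}
    print(step, mine)

# Round 1: {'A': 6.5, 'B': 5.5}
# Round 2: {'A': 8.25, 'B': 5.25}
# ...my utilities approach Alice's (10 for A, 5 for B)
```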

I haven’t thought about this very carefully, but I think the same convergence will occur if we add more utilitarians to the universe. If we add more Alice-type non-utilitarians, there is no guarantee of convergence. So anyway, this seems to me a pretty strong argument against utilitarianism. If we have a society of perfect utilitarians, a single defector who refuses to change her utility function in response to changes in others’ can essentially bend the society to her will, forcing (through the power of moral obligation!) everybody else to modify their utility functions to match hers, no matter what her preferences actually are. Even if there are no defectors, all the utilitarians will self-modify until they arrive at some bland (value judgment alert) middle ground.
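For anyone who wants to poke at the multi-agent case, here is a minimal sketch of the same averaging dynamics with any mix of utilitarians and fixed, Alice-type agents. The simultaneous-update rule and all the example numbers are my own assumptions for illustration, not anything argued for above.

```python
# Averaging dynamics with several agents. Utilitarians simultaneously
# replace their utilities with the average over everyone; Alice-type
# agents keep theirs fixed. Utilities are lists indexed by outcome
# (here two outcomes, A and B).

def run(utilitarians, fixed, rounds=50):
    for _ in range(rounds):
        everyone = utilitarians + fixed
        n = len(everyone)
        avg = [sum(u[i] for u in everyone) / n for i in range(len(everyone[0]))]
        utilitarians = [list(avg) for _ in utilitarians]
    return utilitarians

# Example: two utilitarians and two fixed agents with clashing preferences.
print(run(utilitarians=[[3.0, 6.0], [7.0, 2.0]],
          fixed=[[10.0, 5.0], [0.0, 9.0]]))
```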

Now that I think about it, I suspect this is basically just a half-baked corollary to Bernard Williams’ famous objection to utilitarianism:

The point is that [the agent] is identified with his actions as flowing from projects or attitudes which… he takes seriously at the deepest level, as what his life is about… It is absurd to demand of such a man, when the sums come in from the utility network which the projects of others have in part determined, that he should just step aside from his own project and decision and acknowledge the decision which utilitarian calculation requires. It is to alienate him in a real sense from his actions and the source of his action in his own convictions. It is to make him into a channel between the input of everyone’s projects, including his own, and an output of optimific decision; but this is to neglect the extent to which his projects and his decisions have to be seen as the actions and decisions which flow from the projects and attitudes with which he is most closely identified. It is thus, in the most literal sense, an attack on his integrity.

Anyway, I’m sure ideas of this sort have been developed much more carefully and seriously by philosophers, or even other posters here at LW. As I said, any references would be greatly appreciated.