That’s why I threw in the disclaimer about needing some theory of self/identity. Possible future Phils must bear a special relationship to the current Phil, one that is not shared by all other future people, or else you lose egoism altogether when speaking about the future.
There are certainly some well-thought-out arguments that when you think about your possible future, you are really thinking about an entirely different person, or a variety of different possible people. But the further you go down that road, the less clear it is that classical decision theory has any rational claim on what you ought to do. The Ramsey/Von Neumann-Morgenstern framework tacitly requires that when a person acts so as to maximize his expected utility, he does so on the assumption that he is actually maximizing HIS expected utility, not someone else’s.
This framework only makes sense if each possible person over which the utility function is defined is the agent’s future self, not another agent altogether. There needs to be some logical or physical relationship between the current agent and the class of possible future agents such that self/identity is maintained.
The less clear it is that identity is maintained, the less clear it is that there is a rational maxim requiring the agent to maximize the future agent’s utility... which, among other things, is a philosopher’s explanation for why we discount future value when acting, beyond what you get from the simple time value of money.
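To sketch what I mean (the identity weight here is my own illustrative device, not anything the VNM axioms give you), the current agent’s valuation of an action $a$ might look like

$$V(a) = \sum_{t}\delta^{t}\,\rho_{t}\sum_{s}p(s\mid a)\,u_{\mathrm{self}}(s,t),$$

where $p(s\mid a)$ is the probability of future state $s$, $\delta^{t}$ is ordinary time discounting, and $\rho_{t}\in[0,1]$ measures how strongly the person at time $t$ still counts as the same self. As the identity relation weakens, $\rho_{t}$ shrinks, and with it the rational force of “maximize that future person’s utility”.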
So you still have the problem that the utility function is, for instance, defined over all possible future Phils’ utilities, not over all possible future people’s. Possible Phils are among the class of possible people (I presume), but not vice versa. So there is no logical guarantee that a process which holds for possible Phils also holds for possible future people.
This brings us back to the original argument, rather than bearing on the definition of expected utility functions or the status of utilitarianism in general.
PhilGoetz’s argument appears to contain a contradiction similar to the one Moore discusses in Principia Ethica, where he argues that the principle of egoism does not entail utilitarianism.
Egoism: X ought to do what maximizes X’s happiness.
Utilitarianism: X ought to do what maximizes EVERYONE’s happiness.
(Or put X_0 for X, and X_i, for each person i, for Everyone.)
X’s happiness is not logically equivalent to Everyone’s happiness. The important takeaway here is that because happiness is indexed to an individual person (at least as defined in the egoistic principle), each person’s happiness is an independent logical term.
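To make the indexing explicit (this is my restatement, not Moore’s own notation), the two principles recommend maximizing different objective functions:

$$\text{Egoism: } \max_{a}\; u_{X}(a) \qquad\qquad \text{Utilitarianism: } \max_{a}\; \sum_{i} u_{i}(a),$$

where each $u_{i}$ is a separate term indexed to a separate person. $u_{X}$ appears as just one summand on the right, so maximizing it neither entails nor is entailed by maximizing the sum.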
We have to broaden the scope of egoism slightly to accommodate whatever concept of the utility function you use, and the discussion of possible selves. However, unless you have a pretty weird concept of self/identity, I don’t see why it wouldn’t work: X’s future self in all possible worlds bears a relationship to X at time 0 such that future X’s happiness remains independent of future Everyone’s happiness.
Anyway, using Von Neumann-Morgenstern doesn’t work here. There is no logical reason to believe that averaging over possible states with respect to an individual’s utility has any implications for averaging happiness over many different individuals.
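A toy numerical sketch of that distinction (the names and numbers are made up purely for illustration):

    # (a) VNM expected utility: one agent, probabilities over that agent's possible states.
    phil_state_probs = [0.5, 0.3, 0.2]      # probabilities of Phil's possible futures
    phil_state_utils = [10.0, 4.0, -2.0]    # Phil's utility in each of those futures
    phil_expected_utility = sum(p * u for p, u in zip(phil_state_probs, phil_state_utils))

    # (b) Utilitarian aggregation: many agents, one outcome, utilities summed or averaged.
    population_utils = {"Phil": 10.0, "Alice": -5.0, "Bob": 2.0}
    total_utility = sum(population_utils.values())
    average_utility = total_utility / len(population_utils)

    # (a) is an expectation of a single utility function, weighted by probabilities
    # that sum to 1; (b) adds up different people's utility functions. Nothing in
    # the VNM axioms licenses treating (b) as a special case of (a).
    print(phil_expected_utility, total_utility, average_utility)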
As an addendum, neither average nor total utility provides a solution to the fairness, or justice, issue (i.e. how utility is distributed among people, which at least has some common-sense gravity to it). Individual utility maximization more or less does not have to deal with that issue at all (there might be some issues with the time-ordering of preferences, etc., but that’s not close to the same thing). That’s another sign that Von Neumann-Morgenstern just doesn’t give an answer as to which ethical system is more rational.