Utilitarians don’t have to sum different utility functions. A utilitarian has a utility function that happens to be defined as a sum of intermediate values assigned to each individual. Those intermediate values are also (confusingly) referred to as utility, but they don’t come from evaluating any of the infinite variety of ‘true’ utility functions of every individual. They come from evaluating the total utilitarian’s model of individual preference satisfaction (or happiness or whatever).
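To make the distinction concrete, here is a minimal sketch of that structure. All names (`modeled_welfare`, `total_utility`, the dict fields) are illustrative assumptions, not anyone's actual formalism: the point is just that there is one utility function, defined as a sum of numbers produced by the utilitarian's own model of each person, not by the persons' own utility functions.

```python
def modeled_welfare(person):
    # The total utilitarian's model: a single number estimating this
    # person's preference satisfaction (or happiness, or whatever).
    # This is the "intermediate value", confusingly also called utility.
    return person.get("estimated_satisfaction", 0.0)

def total_utility(world):
    # The utilitarian's ONE utility function, which happens to be
    # defined as a sum of model-assigned intermediate values.
    return sum(modeled_welfare(p) for p in world)

population = [
    {"name": "a", "estimated_satisfaction": 0.5},
    {"name": "b", "estimated_satisfaction": 0.25},
]
print(total_utility(population))
```

Nothing here evaluates anyone's ‘true’ utility function; `modeled_welfare` is entirely the evaluator's own construct.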
Or at least it seems to me that it should be that way. If I see a simple technical problem that doesn’t really affect the spirit of the argument, the best thing to do is to fix the problem and move on. If total utilitarianism really is commonly defined as summing every individual’s utility function, then that is silly, but it’s a problem of confused terminology and not really a strong argument against utilitarianism.
Well, then you can have a model where the modeled individual is sad when the real individual is happy and vice versa, and there would be no problem with that.
You’ve got to ground the symbols somewhere. The model has to be defined to approximate reality for it to make sense, and for the model to approximate reality it has to somehow process the individual’s internal state.
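The grounding requirement can be sketched the same way. Again every name here is a hypothetical illustration: a welfare model is only meaningful if it is constrained to track the individuals' actual internal states, and the inverted model from the objection above fails any such check.

```python
def model_error(modeled, actual):
    # Mean absolute error between the model's welfare estimates and
    # the individuals' actual (measured or reported) welfare levels.
    # A grounded model must keep this small; an ungrounded one need not.
    diffs = [abs(m - a) for m, a in zip(modeled, actual)]
    return sum(diffs) / len(diffs)

# A pathological "model" that is sad exactly when the individual is happy:
actual = [0.9, 0.1, 0.8]
inverted = [1 - a for a in actual]

# The inverted model has large error against reality, so a definition
# that requires the model to approximate reality rules it out.
print(model_error(inverted, actual))
```

The check itself presupposes access to `actual`, which is the point of the comment: without some way of processing the individual's internal state, there is nothing to compare the model against.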
But the spirit of the argument is ungrounded in anything. What evidence is there that you can do this stuff at all using actual numbers without repeatedly bumping into “don’t do non-normative things even if you got that answer from a shut-up-and-multiply”?