If you can conclude that you should kill Grandma to make the world a better place, Grandma too can conclude that she should kill herself to make the world a better place—or better yet, just give the money to charity right now without killing anyone!
But she hasn’t done so! This at the very least indicates that she doesn’t think those actions would be better. So we have a conflict: agents disagree about which actions have greater utility.
Utilitarianism alone cannot resolve this. It is based on the fiction that there exists some universal utility function that all agents will agree on. It’s a useful fiction for thought experiments, but it is a terrible model for reality.
You can’t even just apply some aggregating function to the utility functions of individual agents, since each agent’s utility function is only defined up to positive scaling and translation, while aggregation functions are not invariant under those transformations. The best you can do is limit yourself to some Pareto frontier, since Pareto dominance is invariant under per-agent scaling and translation, but it hardly takes a shining moral principle to say “don’t do things that make everybody including yourself worse off”.
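A toy sketch makes the point concrete (the options and numbers here are invented for illustration): rescaling one agent’s utility, which changes nothing about that agent’s preferences, flips which option a sum-aggregator picks, while the Pareto frontier stays the same.

```python
# Two hypothetical options scored by two agents' utility functions.
options = {"A": (3.0, 0.0), "B": (0.0, 2.0)}

def best_by_sum(opts):
    """Pick the option maximizing summed utility (a naive aggregator)."""
    return max(opts, key=lambda k: sum(opts[k]))

def pareto_front(opts):
    """Options not dominated: no other option is >= everywhere and > somewhere."""
    def dominated(k):
        return any(
            all(o >= u for o, u in zip(opts[j], opts[k]))
            and any(o > u for o, u in zip(opts[j], opts[k]))
            for j in opts if j != k
        )
    return {k for k in opts if not dominated(k)}

# Rescale agent 2's utility by 10x -- the same preferences, relabeled.
rescaled = {k: (u1, 10 * u2) for k, (u1, u2) in options.items()}

print(best_by_sum(options))   # -> A (sum 3.0 beats 2.0)
print(best_by_sum(rescaled))  # -> B (sum 20.0 beats 3.0): the "best" option flipped
print(pareto_front(options) == pareto_front(rescaled))  # -> True: frontier unchanged
```

The aggregator’s verdict depends on an arbitrary choice of units for each agent; the Pareto frontier does not, which is exactly why it is the most you can extract without smuggling in interpersonal comparisons.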