[Utilitarianism is] very good. It’s more or less reliably better than anything else.
That’s a sweeping claim. A number of people have made similar points, but I’ll weigh in anyway:
It’s pretty nearly the case that there is nothing to judge an ethical theory by except intuition, and utilitarianism fares badly by that measure. (One can also judge a theory by how motivating it is, how consistent it is, and so on. These considerations might even lead us to go against direct intuition, but there is no point in a consistent and/or motivating system that is basically wrong.)
One problem with utilitarianism is that it tries to aggregate individual values, making it unable to handle the kinds of values that are only definable at the group level, such as equality, liberty, and fraternity.
Since it focuses on outcomes, it is also blind to the intention or level of deliberateness behind an act. Nothing could be more out of line with everyday practice, where “I didn’t mean to” is a perfectly good excuse, for all that it doesn’t change any outcomes.
Furthermore, it has problems with obligation and motivation.
The claim that the greatest good is the happiness of the greatest number has intuitive force to some, but regarded as an obligation it implies one must sacrifice oneself until one is no longer happier or better off than anyone else; it is highly demanding. On the other hand, it is not clear where the obligation comes from, since the is-ought gap has not been closed. In the negative case, utilitarianism merely suggests morally worthy actions, without making them obligatory on anyone. It has only two non-arbitrary points at which to set a level of obligation: zero and the maximum.
Even if the bullet is bitten, and it is accepted that “maximum possible altruism is obligatory”, the usual link between obligations and punishments is broken. It would mean that almost everyone is failing their obligations, but few are getting any punishment (even social disapproval).
That’s without even getting on to the problems arising from mathematically aggregating preferences, such as utility monsters, the repugnant conclusion, etc.