This is a perfect demonstration of what I wrote recently about moral theories that try to accomplish too much.
Moral/ethical theories are attempts to formalize our moral intuitions. Preference utilitarianism extrapolates the heuristic “give people what they want”, and eventually hits the question “but what if they want something that’s bad for them?” Happiness utilitarianism extrapolates the heuristic “make people happy”, and eventually hits the question “but what if they don’t want to be happy?” Thus, they conflict.
This conflict is unavoidable. If extrapolating any one moral heuristic were easy and satisfactory in all cases, we wouldn’t have multiple heuristics to begin with! Satisfying a single simple human desire, and not others, is undesirable because humans have independent and fundamentally conflicting desires that can’t be mapped onto a single linear scale to be maximized.
IMO, attempts to define a sufficiently ingenious and complex scale which could be maximized in all cases have the wrong goal. They disregard the fundamental complexity of human value.
People like simple, general theories. They want a formal model of morality because it would tell them what to do, absolve them of responsibility for moral mistakes, and guarantee agreement with others even when moral intuitions disagree (ha). This isn’t just a hard goal to reach, it may be the wrong goal entirely. Moral heuristics with only local validity (as described in the comment I linked above) may not just be easier but better, precisely because they avoid repugnant conclusions and don’t demand difficult, improbable behavior from those who follow them.