There is insufficient basis for making such a comparison. It’s highly questionable that an ethical system can be “right” in the same way that a physical theory can be “right”. There is an obvious standard by which to evaluate the rightness of a scientific theory: just check whether its factual claims accurately describe reality. The “system-is” must match the “reality-is”. But a moral system is made out of oughts, not descriptive statements. The “system-ought” should match… exactly what? We cannot even know, or talk about, our “reality-oughts” without reference either to our intuitions or to our moral system itself. If the latter, any moral system is self-referential and thus has no necessary grounding in reality; if the former, then our foundational morality just is our system of moral intuitions, and an ethical system like utilitarianism merely describes or formalises it, and may well be superfluous. And the entire thesis of your post is that the “reality-oughts” may turn out to fly in the face of our intuitions. This undermines the only basis there is for solving the is-ought problem.
The reason you expect some morally unintuitive prescriptions to prevail seems to rely on choosing the systemically consistent way out of extreme moral dilemmas, however repugnant it may be. Now (I should mention I’m a total pleb in physics, please contradict me if this is wrong) we generally take reality to be self-consistent by necessity, and we aspire towards building self-consistent physical models of the world, even at the expense of our intuitions; doing otherwise would, as far as I can tell, amount to including magic as a feature of our model of the world. In the moral realm, accepting inconsistency would mean accepting hypocrisy as a necessity, which is emotionally unpalatable in the same way that inconsistency in a physical theory is intellectually confusing. But it is not obvious that morality is ultimately self-consistent rather than tragic. Personally, I incline towards the tragedy hypothesis. Bending over backwards for self-consistency seems to be a mistake, as evidenced by repugnant conclusions of one sort or another. The fact that your moral system pits consistency values against object-level values in extreme dilemmas seems to be a strike against the system rather than against those object-level values.
About utilitarianism specifically: if you have your zeitgeist-detection goggles on, it’s easy to see utilitarianism as a product of its contemporary biases, shaped by a thoroughly economic worldview. Utility functions, in many respects, as a moral currency. And it introduces even worse glitches and absurdities than its economic counterpart, because it is a totalising ethical notion, one which aims to touch every aspect of human existence instead of staying confined to the economic realm. Utility is a quantitative approach to value that attempts to collapse qualitatively different values into one common currency: how much satisfaction can be extracted from any of them. My go-to example is Yudkowsky’s torture vs. dust specks, which fails to distinguish between bad and evil (nuances are, apparently, for unenlightened pre-moderns), scaling the amount of badness up to arbitrary levels until it supposedly surpasses evil; a toy sketch of that arithmetic follows below. At its most charitable, this kind of mindset is a useful framework for a policy-maker who has to direct finite resources towards alleviating either a common but slight health problem (say, colds or allergies) or a rare but deadly disease. Again, a problem that is economic in nature, one that has a dollar value. Utilitarianism is also popular around here for being more amenable to computational (AI) applications than other ethical systems. Beyond that, to hail it as the ultimate moral system is excessive and unwarranted.
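Since the dust-specks move is, at bottom, arithmetic, here is a toy sketch of the aggregation logic I’m objecting to (the constants and names are invented for illustration, not anything Yudkowsky actually proposes): once every harm is denominated in a single scalar, some finite number of specks always outweighs the torture, whatever the constants are set to.

```python
# Toy model of single-currency utilitarian aggregation.
# The numbers are made up for illustration; only the shape matters.

SPECK_DISUTILITY = 1e-9    # hypothetical badness of one dust speck
TORTURE_DISUTILITY = 1e6   # hypothetical badness of 50 years of torture

def total_disutility(n_specks: int) -> float:
    # Aggregation is plain summation, so the qualitative difference
    # between "bad" and "evil" vanishes into one scalar.
    return n_specks * SPECK_DISUTILITY

# For any positive speck value there is a finite crossover point:
crossover = TORTURE_DISUTILITY / SPECK_DISUTILITY
print(f"specks outweigh torture once n > {crossover:.0e}")  # n > 1e+15
```

The point is not the particular numbers: the conclusion is forced by the single-currency assumption alone, and no choice of constants avoids it. That is precisely the feature I’m calling a glitch.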