Others are critical of moral realism because it postulates the existence of a kind of “moral fact” which is nonmaterial and does not appear to be accessible to the scientific method. Moral truths cannot be observed in the same way as material facts (which are objective), so it seems odd to count them in the same category. One emotivist counterargument (although emotivism is usually non-cognitivist) alleges that “wrong” actions produce measurable results in the form of negative emotional reactions, either within the individual transgressor, within the person or people most directly affected by the act, or within a (preferably wide) consensus of direct or indirect observers.
Under the section “Criticisms”:
The emotivist counterargument raises more questions than it answers. Surely not all negative emotional reactions signal morally wrong actions. Besides, emotivism isn't even aligned with moral realism.
I see—thanks.
That some criticisms of moral realism appear to lack coherence does not seem to me to be a point that counts against the idea.
I expect moral realists would deny that morality is any more nonmaterial than any other kind of information—and would also deny that it does not appear to be accessible to the scientific method.
If moral realism works as a system of logical propositions and deductions, then it must have moral axioms. How are these grounded in material reality? How can they be anything more than "because I said so, and I hope you'll agree"? Isn't the choice of axioms made using a moral theory nominally opposed to moral realism, such as emotivism or (amoral) utilitarianism?
One way would be to consider the future of civilization. At the moment we observe a Shifting Moral Zeitgeist. In the future, however, ideas about how to behave towards other agents may settle into an optimal region. If that turns out to be a global optimum rather than a local one, meaning much the same rules would be found by most surviving alien civilizations, then that would provide a good foundation for the ideas of moral realism.
Even today, it should be fairly obvious that some moral systems are "better" than others ("better" in the sense of promoting the survival of those systems). That doesn't necessarily mean there's a single "best" one, but it leaves the possibility open.