Yes and no. Morality is certainly less fundamental than physics, but I would argue no less real a concept than “breakfast” or “love,” and has enough coherence – thingness – to be useful to try to outline and reason about.
The central feature of morality that needs explaining, as I understand it, is how certain behaviors or decisions make you feel in relation to how other people feel about your behaviors, which is not something you have full control over. It is a distributed cognitive algorithm, a mechanism for directing social behavior through the sharing of affective judgments.
I’ll attempt to make this more concrete. Actions that are morally prohibited have consequences, both in the form of direct social censure (due to the moral rule itself) and indirect effects that might be social or otherwise. You can think of the direct social consequences as a fail-safe that stops dangerous behavior before real harm can occur, though of course it doesn’t always work very well. In this way the prudential sense of “should” is closely tied to the moral sense of “should” – sometimes in a pure, self-sustaining way, the original or imagined harm becoming a lost purpose.
None of this means that morality is a false concept. Even though you might explain why moral rules and emotions exist, or point out their arbitrariness, it’s still simplest and I’d argue ontologically justified to deal with morality the way most people do. Morality is a standing wave of behaviors and predictable shared attitudes towards them, and is as real as sound waves within the resonating cavity of a violin. Social behavior-and-attitude space is immense, but seems to contain attractors that we would recognize as moral.
That said, I do think it’s valuable to ask the more grounded questions of how outcomes make individuals feel, how people actually act, etc.