Dissolving moral philosophy: from pain to meta-ethics
This is an extract from an appendix of one of my longer blog posts that I keep referring to.
What is pain? Why is pain bad?
It’s the same trick: we shouldn’t ask, “Why is pain negative?” but “Why do we think pain is negative?” Here’s an answer in the form of a genealogy of morals:
Detectors for intense heat are extremely useful: organisms without them are replaced by those that react reflexively to heat. Muscle-fatigue detectors are also extremely useful: organisms without them are replaced by those that react to these signals and conserve their muscle tissue. The same goes for detectors of the dangerous mechanical and chemical stimulation conveyed by type-C fibers.
The brain constantly processes many different signals, but signals from type-C fibers are hard-coded as high priority. The brain’s attention then focuses on these signals, and in cases of severe pain it’s impossible to focus on anything else.
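Here is a toy sketch of that priority story in Python; the signals and priority numbers are my own illustrative inventions, not claims about neuroscience. Attention simply goes to whichever incoming signal carries the highest hard-coded priority, and severe pain outranks everything else:

```python
import heapq

# Toy signals as (negative priority, name): heapq is a min-heap,
# so negating the priority makes it pop the highest-priority signal first.
signals = [
    (-1, "ambient noise"),
    (-3, "interesting conversation"),
    (-10, "type-C fiber firing (severe pain)"),
]
heapq.heapify(signals)

# Attention simply goes to whatever outranks everything else.
_, focus_of_attention = heapq.heappop(signals)
print(focus_of_attention)  # type-C fiber firing (severe pain)
```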
Next, we start to moan, scream, and cry to alert other tribe members that we are in trouble.
There are many different forms of suffering: physical pain, emotional suffering, psychological trauma, existential anguish, suffering from loss, chronic illness, and addiction. Suffering is a broad term covering various phenomena, but at its core it refers to states that individuals would choose to end if given the option.
If you concentrate on pain closely enough to deconstruct it dimension by dimension, feature by feature, you will see that it’s a cluster of sensations not that different from other sensations. Go far enough through this process and you can even become like the Buddhist monks who are able to self-immolate without suffering.
At the end of the day, asking why pain is bad is tautological. “Bad” is a label that was created to describe a cluster of things that should be avoided, including pain.
What is Ethics?
We can continue the previous story:
Values: The tribe from the previous story organizes itself around common values, and tribe members must learn these values to be good members of the tribe. Tribes that coordinate through such values/rules/laws/protocols survive longer. The village elder, stroking his beard, says: “Ah, X1 is good, X2 is bad.” This statement transmits the illusion of morality to the tribe members. It is a necessary illusion, shared by the tribe’s members, that allows the tribe to function effectively. It can be thought of as a mental meme: software downloaded into the minds of individuals that guides their behavior appropriately.
Prophets and philosophers then attempt to systematize this process by asking, “What criteria determine the goodness of X?” They create ad hoc verbal rules that fit the training dataset X1, X2, …, XN, Y1, Y2, …, YN. One rule that fits the dataset reasonably well is “Don’t kill people of your own tribe.” That gets written in the holy book alongside other poor heuristics.
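To make the “rules fit to a dataset” analogy concrete, here is a minimal sketch in Python; the example acts, candidate rules, and labels are all illustrative inventions of mine, not part of the original story. A rule gets selected because it disagrees with the pre-labeled cases least often, and even the winner fits imperfectly, which is how poor heuristics end up in the holy book:

```python
examples = [
    # (act, victim is a tribe member?, the tribe's label)
    ("kill a tribe member",            True,  "bad"),
    ("share food with a tribe member", True,  "good"),
    ("kill an outsider in battle",     False, "good"),  # by the tribe's lights
    ("steal from a tribe member",      True,  "bad"),
]

# Candidate verbal rules, each mapping an act to a verdict.
candidate_rules = {
    "never kill anyone":
        lambda act, in_tribe: "bad" if "kill" in act else "good",
    "don't kill people of your own tribe":
        lambda act, in_tribe: "bad" if ("kill" in act and in_tribe) else "good",
}

def mismatches(rule):
    """Count how often a rule disagrees with the tribe's labels."""
    return sum(rule(act, in_tribe) != label for act, in_tribe, label in examples)

best = min(candidate_rules, key=lambda name: mismatches(candidate_rules[name]))
print(best)  # don't kill people of your own tribe (1 mismatch: it misses theft)
```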
Flourishing Arab civilization: Merchants invent trade, and then mathematicians invent money! It’s really great to assign numbers to things, as it facilitates commerce.
Ethicists: Philosophers familiar with the use of numbers then try to assign values to different aspects of the world: “X1 is worth 3 utils! X2 is worth 5 utils!” They call themselves utilitarians. Philosophers less happy with the use of numbers prefer sticking to hard-coded rules. They call themselves deontologists. The two camps often engage in arguments with each other.
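A toy contrast of the two evaluation styles, with actions, util scores, and rules all invented for illustration: the utilitarian ranks actions by utils, the deontologist only checks whether a hard-coded rule is violated, and the two can disagree on the same case, which is roughly what the arguments are about:

```python
# Invented toy data: each action gets a util score and a set of rules it breaks.
actions = {
    "lie to spare someone's feelings": {"utils": 2,  "violates": {"don't lie"}},
    "tell a painful truth":            {"utils": -1, "violates": set()},
}

def utilitarian_choice():
    # Pick whatever maximizes utils, ignoring the rules entirely.
    return max(actions, key=lambda a: actions[a]["utils"])

def deontologist_verdict(action):
    # Permissible iff no hard-coded rule is violated.
    return "forbidden" if actions[action]["violates"] else "permissible"

chosen = utilitarian_choice()
print(chosen)                        # lie to spare someone's feelings
print(deontologist_verdict(chosen))  # forbidden -> the two camps disagree
```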
Meta-ethicists: Philosophers who witness these disagreements among philosophers over which system is best start writing about meta-ethics. Much of what they say is meh. Just as the majority of intellectual production in theology is done by people who are confused about the nature of the world, it seems to me that the majority of intellectual production in moral philosophy is done by people self-selected to spend years on those problems.
And note that at no point in this story did I cross Hume’s guillotine: I never derived an ought from an is.
I’m not sure what your point is here.
Also note that there is Axiology (what things are good/bad?), Morality (what should you do?), and Law (what rules should be made/enforced?). It makes sense to try to figure out what is good, what you should do, and what institution-building activities are necessary.
I think it makes sense to work on these questions; they matter to me, so I see value in someone burning their FLOPs to help me and other people get easy-to-verify deductions. I also agree that the current quality of such work is not that great (including yours).
So, I agree that at some level of abstraction any ought can be rationalized with an is. But at some point agents need to define meta-strategies for dealing with uncertain situations: for example, all the decision theories and thought experiments needed to ground the rational frameworks used to evaluate and reason about what any agent should do to maximize expected outcomes, given the utility functions we ascribe that agent should have with respect to the world.
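For concreteness, here is the bare-bones version of the kind of framework this gestures at; the actions, probabilities, and utilities are toy numbers I made up. Given beliefs over uncertain outcomes and a utility function, pick the action with the highest expected utility:

```python
# Toy beliefs: action -> list of (probability, outcome). All numbers invented.
beliefs = {
    "carry water":  [(0.9, "stay hydrated"), (0.1, "waste effort")],
    "travel light": [(0.6, "move fast"),     (0.4, "go thirsty")],
}
utility = {"stay hydrated": 5, "waste effort": -1, "move fast": 3, "go thirsty": -4}

def expected_utility(action):
    # Sum each outcome's utility weighted by its probability.
    return sum(p * utility[outcome] for p, outcome in beliefs[action])

best = max(beliefs, key=expected_utility)
print(best, expected_utility(best))  # carry water 4.4
```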
While there is no scientific justification or explanation for value beyond what we ascribe (thus there is no ontological basis for morals), we generally agree that reality is self-aware through our conscious experience. And unless everything is fundamentally conscious, or consciousness does not exist, the various loci of subjectivity (however you want to define them) form the rational basis for value calculus. So then isn’t the debate over what constitutes consciousness, the ‘camps’ that argue over its definition, and the conclusions we draw from it, exactly what would be used to derive the desired utility recipients of decision frameworks such as CEV? And is this not a moral philosophy and a meta-ethical practice in and of itself? Until that’s settled, the Camp #2 framework gives you a taxonomy for the structures to which meta-ethics should be applied (without even importing any mysticism), and Camp #1 uses a language that keeps morality ontologically (or at least linguistically) inert.
At some point we adopt and agree on axioms where science does not give us the data to reason from, and those axioms should be whatever we agree may have the highest utility. But since they are not determined by experiment beforehand, we can only use counterfactual reasoning to agree on them; and the counterfactual “we ought to have done this because we will do this” is itself equally up for debate.
This is exactly the problem with is-ought: (almost) any ought can be backward-reasoned to an is, but it’s very hard to determine the causality and necessity of those relationships. The current set of ‘is’es leads to a large set of contradictory and incomplete oughts.