Rational Ethics

[Looking for feedback, particularly on links to related posts; I’d like to finish this out as a post on the main, provided it doesn’t have too many wrinkles to be salvageable.]

Morality as Fixed Computation and Abstracted Idealized Dynamics, both part of the Metaethics Sequence, discuss ethics as computation. This post is primarily a response to those two posts, and in particular to the impossibility of computing the full ethical ramifications of an action. Note that I treat morality as objective, which means, loosely speaking, that two people who share the same ethical values should, provided neither makes logical errors, arrive at approximately the same ethical system.

On to the subject matter of this post: are Bayesian utilitarian ethics actually utilitarian, in the plain sense of being useful? For you? For most people?

And, more specifically, is a rational ethics system more rational than one based on heuristics and culture?

I would argue that the answer is, for most people, “No.”

The summary explanation of why: because cultural ethics are functioning ethics. They have been tested, and they work. They may not be ideal, but most of the “ideal” ethical systems proposed in the past haven’t worked at all. In terms of Eliezer’s posts, cultural ethics are the answers that other people have already agreed upon; they are ethical computations which have already been carried out, and while there may be errors, most of the potential errors an ethicist might arrive at have already been weeded out.

The longer explanation of why:

First and foremost, rationality, which I will use from here on instead of the word “computation,” is -expensive-. “A witch did it”, or the equivalent “Magic!”, while not conceptually simple, is logically simple; the complexity is encoded in the concept, not the logic. The rational explanation for, say, static electricity requires far more information about the universe; for an individual who aspires to be a farmer because he likes growing things, that information may never be useful, and internalizing it may never pay for itself. It can be fully consistent with a rational attitude to accept irrational explanations when you have no reasonable expectation that the rational explanation will provide any benefit, or, more exactly, when the cost of the rational explanation exceeds its expected benefit.

Or, to phrase it another way, it’s not always rational to be rational.
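As a toy illustration of that decision rule (the function name, units, and numbers here are hypothetical, chosen purely for illustration):

```python
def worth_computing(expected_benefit: float, cost: float) -> bool:
    """The rule from above: pursue the expensive, rational explanation
    only when its expected benefit exceeds its cost. Both arguments are
    in the same (hypothetical) units of value, e.g. hours of effort."""
    return expected_benefit > cost

# The aspiring farmer and the physics of static electricity: costly to
# internalize, and expected to pay off essentially nothing.
print(worth_computing(expected_benefit=0.1, cost=40.0))    # False

# The same farmer and soil chemistry: comparably costly to learn, but
# expected to pay for itself many times over.
print(worth_computing(expected_benefit=200.0, cost=40.0))  # True
```

The hard part in practice is, of course, estimating the expected benefit at all; the sketch only captures the shape of the tradeoff.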

Terminal Values versus Instrumental Values discusses some of the computational expenses involved in ethics. It’s a nontrivial problem.

Rationality is a -means-, not an end. A “rational ethics system” is merely an ethical system based on logic, on reason. But if you don’t have a rational reason to adopt a rational ethics system, you’re failing before you begin; logic is a formalized process, but it’s still just a process. The reason for adopting a rational ethics system is the starting point, the beginning, of that process. If you don’t have a beginning, what do you have? An end? That’s not rationality, that’s rationalization.

So the very first step in adopting a rational ethics system is determining -why- you want to adopt a rational ethics system. “I want to be more rational” is irrational; it treats the means as its own justification.

“I want to know the truth” is a better reason for wanting to be rational.

But the question in turn must, of course, be “Why?”

“Truth has inherent value” isn’t an answer, because value isn’t inherent, and certainly not to truth. There is a blue pillow in a cardboard box to my left. This is a true statement. You now have truth. Are you more valuable? Has this truth enriched your life? There are some circumstances in which this information might be useful to you, but you aren’t in those circumstances, nor are you ever likely to be. It doesn’t even matter if I lied about the blue pillow. If truth had inherent value, then every true statement would have to carry that value; plainly, not all truth matters.

A rational ethics system must have its axioms. “Rationality,” I hope I have established, is not a useful axiom, nor is “Truth.” It is the values that your ethics system seeks to maximize which are its most important axioms.

The truths that matter are the truths which directly relate to your moral values, to your ethical axioms. A rational ethics system is a means of maximizing those values—nothing more.

If you have a relatively simple set of axioms, a rational ethics system is relatively simple, if still potentially expensive to compute. Strict Randian Objectivism, for example, attempts to use human life as its sole primary axiom, which makes it a relatively simple ethical system. (I’m a less strict Objectivist and use a different primary axiom, personal happiness; this rarely leads to conflict with Randian Objectivism, which treats personal happiness as a secondary axiom.)

If, on the other hand, you, like most people, have a wide variety of personal values you are attempting to maximize, then assessing each action on its individual ethical merits becomes computationally prohibitive.

Which is where heuristics, and inherited ethics, start to become pretty attractive, particularly when you share your culture’s ethical values (and most people do, to a greater extent than they don’t).

If you share at least some of your culture’s ethical values, normative ethics can provide immense value to you by eliminating most of the work of evaluating ethical scenarios. You don’t need to start from the bottom up and prove to yourself that murder is wrong. You don’t need to weigh the pros and cons of alcoholism. You don’t need to prove that charity is a worthwhile thing to engage in.
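In the computational framing, normative ethics function like a cache of precomputed answers: you pay for a from-scratch evaluation only when the culture has no verdict on file. A minimal sketch, with every value name, weight, and verdict invented purely for illustration:

```python
# Hypothetical personal values and weights; real people hold many more,
# and the weights themselves are uncertain.
VALUES = {"honesty": 0.3, "kindness": 0.25, "liberty": 0.25, "prosperity": 0.2}

# The cultural "cache": verdicts society has already computed and field-tested.
CULTURAL_VERDICTS = {"murder": -1.0, "theft": -0.8, "charity": 0.7}

def score_against(action: str, value: str) -> float:
    """Stand-in for the genuinely expensive part: tracing an action's
    ramifications through the world and scoring them against one value."""
    return 0.0  # placeholder; the real computation is the whole problem

def evaluate_from_scratch(action: str) -> float:
    """Full evaluation: weigh the action against every value you hold."""
    return sum(weight * score_against(action, value)
               for value, weight in VALUES.items())

def evaluate(action: str) -> float:
    """Use the precomputed cultural answer when one exists; fall back to
    the expensive computation only for genuinely novel cases."""
    if action in CULTURAL_VERDICTS:
        return CULTURAL_VERDICTS[action]
    return evaluate_from_scratch(action)

print(evaluate("murder"))          # cached: -1.0, no computation needed
print(evaluate("posting online"))  # novel: falls back to the expensive path
```

The point is only the shape of the shortcut: the cached lookup is valuable precisely because the from-scratch evaluation is the part nobody can afford to run for every action.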

“We all engage in ethics, though; it’s not like the farmer with static electricity. Don’t we have a responsibility to understand ethics?”

My flippant response to this question is, should every driver know how to rebuild their car’s transmission?

You don’t need to be a rationalist in order to reevaluate your ethics. An expert can rebuild your transmission; an expert can also pose arguments to change your mind. This has, indeed, happened before on mass scales: racism is no longer broadly acceptable in our society. It took too long, yes, -but- a long-established ethics system, being well-tested, should require extraordinary effort to change. If it were easily mutable, it would lose much of its value, for it would largely be composed of poorly-tested ideas.

All of which is not to say that rational ethics are inherently irrational, only that one should have a rational reason for engaging in them to begin with. If you find that societal norms frequently conflict with your own ethical values, that is a good reason to engage in rational ethics. But if you don’t, perhaps you shouldn’t. And if you do, you should be cautious about pushing a rational ethics system on somebody for whom existing ethical systems work well, if your goal is to improve their well-being.