Some thoughts on relations between major ethical systems

On the recent LessWrong/CFAR Census Survey, I hit the following question:

Which of the following major ethical systems do you subscribe to:

1) Consequentialism

2) Deontology

3) Virtue Ethics

4) Other

To my own surprise, I couldn't come up with a clear answer. I certainly don't consistently apply any one of these systems across every decision I make in my life, and yet I consider myself at least mediocre on the scale of moral living, if not actually Neutral Good. So what is it I'm actually doing, and how can I behave in a more ethically rational way?

Well, analyzing my own cognitive algorithms, I do think I can place these various codes of ethics in relation to each other. Basically, looked at behavioristically/algorithmically, they vary along three dimensions: how much predictive power I have, how well I know my own values, and what it is I'm actually trying to affect.

Consequentialism is the ethical algorithm I consider useful in situations of greatest predictive power and greatest knowledge of my own values. It is, so to speak, the ethical-algorithmic ideal. In such situations, the only drawback is that naive consequentialism fails to consider consequences for the person acting (i.e., me). Once I make that more virtue-ethical adjustment, consequentialism offers a complete ideal for ethical action over the full spectrum of moral values, covering effects on both the universe and myself (but I repeat myself: I'm part of the universe).

However, in almost all real situations, I don't have perfect predictive knowledge, neither of the "external" universe nor of my own values. In these situations, I can still use my incomplete and uncertain knowledge to find acceptable heuristics that I can expect to behave roughly monotonically: follow those rules, and my actions will generally have positive effects. This kind of thinking quickly yields recognizable, regular moral commandments like "You will not murder" or "You will not charge interest above this-or-that amount on loans." Yes, of course we can come up with corner-case exceptions to those rules, and we can also elaborate logically on the rules to arrive at more detailed rules covering more circumstances. But by the time we've fully elaborated the basic commandments into a complete, obsessive-compulsively detailed legal code (oh hello, Talmud), we've already covered most of the major general cases of moral action. We can now state a criterion for when to transition from one level of ethical code to the one below it: our deontological heuristics should be detailed enough to handle any case where we lack the information (about consequences and values) to resort to consequentialism.
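To make that hand-off concrete, here is a minimal sketch in Python of the two-level procedure. Everything in it is my own illustrative invention, not a worked-out decision theory: the information scores, the rule predicate, and the 0.9 cutoff are all hypothetical stand-ins.

```python
from typing import Callable, Optional

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for "enough information"; purely illustrative

def choose_action(
    options: list[str],
    predictive_power: float,   # how well I can foresee consequences, in [0, 1]
    value_knowledge: float,    # how well I know my own values, in [0, 1]
    expected_value: Callable[[str], float],     # consequentialist evaluation of an action
    permitted_by_rules: Callable[[str], bool],  # deontological heuristic over actions
) -> Optional[str]:
    """Fall back from consequentialism to deontology as information degrades."""
    if predictive_power >= CONFIDENCE_THRESHOLD and value_knowledge >= CONFIDENCE_THRESHOLD:
        # Enough information: evaluate consequences directly and take the best action.
        return max(options, key=expected_value)
    # Too little information: keep only rule-permitted actions; any of them will do.
    permitted = [a for a in options if permitted_by_rules(a)]
    return permitted[0] if permitted else None
```

The point of the sketch is just the control flow: consequentialist evaluation when both information scores clear the threshold, and rule-following otherwise.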

At first thought, virtue ethics seems like an even higher-level heuristic than deontological ethics. The problem is that, unlike deontological and consequentialist ethics, it doesn't output courses of action to take, but rather short- and long-term states of mind or character that can be considered virtuous. So we don't have the same kind of thing here; it's not a higher-level heuristic but a seemingly different form of ethics altogether. I do think we can integrate it, however: virtue ethics just consists of a set of moral values over one's own character. "What kind of person do I think is a good person?" might, by default, be a tautological question under strict consequentialism or deontology. However, when we take account of the imperfect nature of real people (we are part of the universe, after all), we can observe that virtue ethics serves as a convenient source of heuristics for becoming the sort of person who can be relied upon to take right actions when moral issues present themselves. Rather than simply saying, "Do the right thing no matter what" (an instruction that simply won't drive real human beings to actually do the right thing), virtue ethics encourages us to cultivate virtues: moral cognitive biases towards at least a deontological notion of right action.
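As a toy illustration of that last idea (my own hypothetical framing, with invented names throughout): cultivating a virtue doesn't change which action the rules recommend; it raises the probability that the imperfect agent actually performs it.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Character:
    # A virtue is modeled as a disposition score in [0, 1]; starting values are arbitrary.
    virtues: dict[str, float] = field(
        default_factory=lambda: {"honesty": 0.5, "courage": 0.5}
    )

    def cultivate(self, virtue: str, amount: float = 0.1) -> None:
        """Strengthen a moral cognitive bias toward right action."""
        self.virtues[virtue] = min(1.0, self.virtues.get(virtue, 0.0) + amount)

    def follows_through(self, virtue: str) -> bool:
        """Does the agent actually do what its rules recommend, this time?"""
        return random.random() < self.virtues.get(virtue, 0.0)
```

On this picture, deontology constrains which actions count as right, while virtue cultivation changes how reliably a real, imperfect agent performs them.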

It's also possible that we can separate virtue ethics into heuristics over our own character and actual values over our own character. These two approaches should then converge in the presence of perfect information: if I knew myself utterly, my heuristics for my own character would exactly match my values over it.

This is my first effort at actually blogging on rationality subjects, so I'm hoping it doesn't cover something already hashed and rehashed in places like the Sequences, of which I certainly can't claim full knowledge.