Three kinds of moral uncertainty

Related to: Moral uncertainty (wiki), Moral uncertainty—towards a solution?, Ontological Crisis in Humans.

Moral uncertainty (or normative uncertainty) is uncertainty about how to act given the diversity of moral doctrines. For example, suppose that we knew for certain that a new technology would enable more humans to live on another planet with slightly less well-being than on Earth[1]. An average utilitarian would consider these consequences bad, while a total utilitarian would endorse such technology. If we are uncertain about which of these two theories is right, what should we do? (LW wiki)
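To make that disagreement concrete, here is a minimal sketch in Python, with made-up population and well-being numbers (none of them come from the quoted example), of how the two theories score the same outcome in opposite ways:

```python
# Toy illustration of the average vs. total utilitarian disagreement.
# All population sizes and well-being levels are invented for the example.

earth_population = 10_000_000_000   # people on Earth
earth_wellbeing = 1.0               # well-being per person on Earth
colony_population = 1_000_000_000   # additional people on the new planet
colony_wellbeing = 0.8              # slightly lower well-being per colonist

def total_utility(groups):
    """Sum of well-being over everyone (total utilitarianism)."""
    return sum(n * w for n, w in groups)

def average_utility(groups):
    """Mean well-being per person (average utilitarianism)."""
    people = sum(n for n, _ in groups)
    return total_utility(groups) / people

without_colony = [(earth_population, earth_wellbeing)]
with_colony = [(earth_population, earth_wellbeing),
               (colony_population, colony_wellbeing)]

# Total utilitarian: the colonists add well-being, so the colony looks good.
print(total_utility(with_colony) > total_utility(without_colony))      # True

# Average utilitarian: the colonists pull the mean down, so it looks bad.
print(average_utility(with_colony) < average_utility(without_colony))  # True
```

Total utility rises because the colonists' lives add well-being on net; average utility falls because their well-being sits below the existing mean.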

I have long been slightly frustrated by the existing discussions of moral uncertainty that I’ve seen. I suspect the reason is that they have been unclear about what exactly is meant by being “uncertain about which theory is right”: what is uncertainty about moral theories? Furthermore, especially when discussing things in an FAI context, several different senses of moral uncertainty seem to get mixed together. Here is my suggested breakdown, with some elaboration:

Descriptive moral uncertainty. What is the most accurate way of describing my values? The classical FAI-relevant question, this is in a sense the most straightforward one. We have some set of values, and although we can describe parts of them verbally, we do not have conscious access to the deep-level cognitive machinery that generates them. We might feel relatively sure that our moral intuitions are produced by a system that’s mostly consequentialist, but suspect that parts of us might be better described as deontological. A solution to descriptive moral uncertainty would involve a system capable of somehow extracting the mental machinery that produces our values, or of creating a moral reasoning system that manages to produce the same values by some other process.

Epistemic moral uncertainty. Would I reconsider any of my values if I knew more? Perhaps we hate the practice of eating five-sided fruit and think that everyone who eats five-sided fruit should be thrown in jail, but if we found out that five-sided fruit made people happier and had no adverse effects, we would change our minds. This roughly corresponds to the “our wish if we knew more, thought faster” part of Eliezer’s original CEV description. A solution to epistemic moral uncertainty would involve finding out more about the world.

Intrinsic moral uncertainty. Which axioms should I endorse? We might be intrinsically conflicted between different value systems. Perhaps we are trying to choose whether to be loyal to a friend or whether to act for the common good (a conflict between two forms of deontology, or between deontology and consequentialism), or we could be conflicted between positive and negative utilitarianism. In its purest form, this sense of moral uncertainty closely resembles what would otherwise be called a wrong question, one where

you cannot even imagine any concrete, specific state of how-the-world-is that would answer the question. When it doesn’t even seem possible to answer the question.

But unlike wrong questions, questions of intrinsic moral uncertainty are real ones that you need to actually answer in order to make a choice. They are generated when different modules within your brain produce different moral intuitions, and are essentially power struggles between various parts of your mind. A solution to intrinsic moral uncertainty would involve somehow tipping the balance of power in favor of one of the “mind factions”. This could involve developing an argument sufficiently persuasive to convince most parts of yourself, or self-modifying in such a way that one of the factions loses its sway over your decision-making. (Of course, if you already knew for certain which faction you wanted to expunge, the uncertainty would already be resolved and you wouldn’t need to do this in the first place.) I would roughly interpret the “our wish … if we had grown up farther together” part of CEV to be an attempt to model some of the social influences on our moral intuitions and thereby help resolve cases of intrinsic moral uncertainty.

This is a very preliminary categorization, and I’m sure that it could be improved upon. There also seem to exist cases of moral uncertainty which are hybrids of several categories—for example, ontological crises seem to be mostly about intrinsic moral uncertainty, but to also incorporate some elements of epistemic moral uncertainty. I also have a general suspicion that these categories still don’t cut reality that well at the joints, so any suggestions for improvement would be much appreciated.