A Sketch of an Anti-Realist Metaethics

Below is a sketch of a moral anti-realist position based on the map-territory distinction, Hume, and studies of psychopaths. Hopefully it will prove productive.

The Map is Not the Territory Reviewed

Consider the founding metaphor of Less Wrong: the map-territory distinction. Beliefs are to reality as maps are to territory. As the wiki says:

Since our predictions don’t always come true, we need different words to describe the thingy that generates our predictions and the thingy that generates our experimental results. The first thingy is called “belief”, the second thingy “reality”.

Of course the map is not the territory.

Here is Albert Einstein making much the same analogy:

Physical concepts are free creations of the human mind and are not, however it may seem, uniquely determined by the external world. In our endeavor to understand reality we are somewhat like a man trying to understand the mechanism of a closed watch. He sees the face and the moving hands, even hears its ticking, but he has no way of opening the case. If he is ingenious he may form some picture of a mechanism which could be responsible for all the things he observes, but he may never be quite sure his picture is the only one which could explain his observations. He will never be able to compare his picture with the real mechanism and cannot even imagine the possibility or the meaning of such a comparison. But he certainly believes that, as his knowledge increases, his picture of reality will become simpler and simpler and will explain a wider and wider range of his sensuous impressions. He may also believe in the existence of the ideal limit of knowledge and that it is approached by the human mind. He may call this ideal limit the objective truth.

The above notions about beliefs involve pictorial analogs, but we can also imagine the same information being contained in other ways. If the ideal map is turned into a series of sentences, we can define a ‘fact’ as any sentence in the ideal map (IM). The moral realist position can then be stated as follows:

Moral Realism: ∃x((x ⊆ IM) ∧ (x = M))

In English: there is some set of sentences x such that every sentence in x is part of the ideal map and x provides a complete account of morality (call that account M). Since the x in question must be M itself, the claim reduces to M ⊆ IM: every sentence of morality appears in the ideal map.

Moral anti-realism simply negates the above: ¬∃x((x ⊆ IM) ∧ (x = M)).
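
To make the formalism concrete, here is a toy illustration in Python. The sentences, and the particular contents of IM and M, are invented placeholders, not claims about what an ideal map would actually contain:

```python
# Toy model: treat the ideal map (IM) and a candidate complete account
# of morality (M) as finite sets of sentences (strings). The particular
# sentences below are invented placeholders.

IM = {
    "Water is H2O.",
    "The Earth orbits the Sun.",
    "Humans have amygdalae.",
}

M = {
    "Gratuitous cruelty is wrong.",
    "Promise-keeping is obligatory.",
}

# ∃x((x ⊆ IM) ∧ (x = M)) collapses to the subset check M ⊆ IM:
moral_realism = M.issubset(IM)
print(moral_realism)  # False for this toy IM: none of M's sentences appear in it
```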

Now it might seem that, as long as our concept of morality doesn’t require the existence of entities like non-natural gods, which don’t appear to figure into an ideal map, moral realism must be true (where else but the territory could morality be?). The problem of ethics, then, is chiefly one of finding a satisfactory reduction of moral language into sentences we are confident of finding in the IM. Moreover, the ‘folk’ meta-ethics certainly seems to be a realist one. People routinely use moral predicates and speak of having moral beliefs: “Stealing that money was wrong”, “I believe abortion is immoral”, “Hitler was a bad person”. In other words, in the maps people *actually have right now* a moral code seems to exist.


Beliefs vs. Preferences

But we don’t think talking about belief networks is sufficient for modeling an agent’s behavior. To predict what other agents will do, we need to know both their beliefs and their preferences (call them goals, desires, affect, or utility function). And when we’re making our own choices we don’t think we’re responding merely to beliefs about the external world. Rather, it seems like we’re also responding to an internal algorithm that helps us decide between actions according to various criteria, many of which reference the external world.

The distinction between belief function and utility function shouldn’t be new to anyone here. I bring it up because the queer thing about moral statements is that they seem to be self-motivating: they’re not merely descriptive, they’re prescriptive. So we have good reason to think that they invoke our utility function. One way of phrasing a moral non-cognitivist position is to say that moral statements are properly thought of as expressions of an individual’s utility function rather than sentences describing the world.

Note that ‘expressions of an individual’s utility function’ is not the same as ‘sentences describing an individual’s utility function’. The latter is something like ‘I prefer chocolate to vanilla’; the former is something like ‘Mmmm, chocolate!’. It’s how the utility function feels from the inside. And the way a utility function feels from the inside appears to be, or at least involve, emotion.
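
Here is a minimal sketch of the two functions; all names and numbers are invented toys, not a serious model of preference. The point is just that predicting choice requires both beliefs and utility, and that describing the utility function is a different act from running it:

```python
# Toy agent: beliefs are probabilities, utility is a separate function.
# All names and numbers are invented for illustration.

beliefs = {"chocolate available": 0.9, "vanilla available": 0.5}

def utility(outcome):
    return {"eat chocolate": 10, "eat vanilla": 4}.get(outcome, 0)

def expected_utility(action):
    # Predicting choice needs both thingies: beliefs AND preferences.
    return beliefs.get(action["requires"], 0.0) * utility(action["outcome"])

actions = [
    {"name": "take chocolate", "requires": "chocolate available", "outcome": "eat chocolate"},
    {"name": "take vanilla",   "requires": "vanilla available",   "outcome": "eat vanilla"},
]
best = max(actions, key=expected_utility)

def describe_preference():
    # A *description* of the utility function: a belief-like sentence about the agent.
    return "I prefer chocolate to vanilla."

def express_preference(outcome):
    # An *expression* of the utility function: output of actually running it.
    return "Mmmm!" if utility(outcome) > 5 else "Meh."

print(best["name"])                          # 'take chocolate'
print(describe_preference())                 # describing the function
print(express_preference(best["outcome"]))   # the function firing: 'Mmmm!'
```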


Projectivism and Psychopathy

That our brains might routinely turn expressions of our utility function into perceived properties of the external world shouldn’t be surprising. This was essentially Hume’s position. From the Stanford Encyclopedia of Philosophy:

Projectivism is best thought of as a causal account of moral experience. Consider a straightforward, observation-based moral judgment: Jane sees two youths hurting a cat and thinks “That is impermissible.” The causal story begins with a real event in the world: two youth performing actions, a suffering cat, etc. Then there is Jane’s sensory perception of this event (she sees the youths, hears the cat’s howls, etc.). Jane may form certain inferential beliefs concerning, say, the youths’ intentions, the cats’ pain, etc. All this prompts in Jane an emotion: She disapproves (say). She then “projects” this emotion onto her experience of the world, which results in her judging the action to be impermissible. In David Hume’s words: “taste [as opposed to reason] has a productive faculty, and gilding and staining all natural objects with the colours, borrowed from internal sentiment, raises in a manner a new creation” (Hume [1751] 1983: 88). Here, impermissibility is the “new creation.” This is not to say that Jane “sees” the action to instantiate impermissibility in the same way as she sees the cat to instantiate brownness; but she judges the world to contain a certain quality, and her doing so is not the product of her tracking a real feature of the world, but is, rather, prompted by an emotional experience.

This account has a surface plausibility. Moreover, it has substantial support in the psychological literature. In particular, the behavior of psychopaths closely matches what we would expect if the projectivist thesis were true. The distinctive neurobiological feature of psychopathy is impaired function of the amygdala, a region mainly associated with emotional processing and memory. Obviously, as a group, psychopaths tend toward moral deficiency. But more importantly, psychopaths fail to make the normal human distinction between morality and convention: lacking the normal emotional response to suffering, they treat moral transgressions as if they were merely conventional ones. Thus a plausible account of moral judgment is that it requires both social convention and emotional reaction. See the work of Shaun Nichols, in particular this for an extended discussion of the implications of psychopathy for metaethics, and his book for a broader, empirically informed account of sentimentalist morality. Auditory learners might benefit from this bloggingheads he did.

If the projectivist account is right, the difference between non-cognitivism and error theory is essentially one of emphasis. If, on the above account, you still want to call moral judgments beliefs, then you are an error theorist (they are beliefs, and systematically false ones). If you think they’re a kind of pseudo-belief, then you’re a non-cognitivist.


But utility functions are part of the territory described by the map!

Modeling reality has a recursive element which tends to generate considerable confusion across multiple domains. The issue is that somewhere in any good map of the territory will be a description of the agent doing the mapping. So agents end up with beliefs about what they believe and beliefs about what they desire. Thus, we might think there could be a set of sentences in the IM that makes up our morality, so long as some of those sentences describe our utility function. That is, the motivational aspect of morality can be accounted for by including in the reduction both a) a sentence which describes which conditions are to be preferred and b) a sentence which says that the agent prefers those conditions.
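
Continuing the toy Python model (the nested dictionary and its sentences are invented placeholders), the recursion looks like this: the map contains a description of the mapper, so clauses (a) and (b) are just more sentences in it:

```python
# Toy sketch of the recursion: a map that contains a description of the
# agent doing the mapping. The entries are invented placeholders.

the_map = {
    "world": {"cats exist": True},
    "self": {
        "beliefs": {"cats exist": True},  # a belief about a belief
        "preferred conditions": "no gratuitous suffering",                    # clause (a)
        "utility": "this agent prefers worlds without gratuitous suffering",  # clause (b)
    },
}

# On the proposed reduction, the sentences under the_map["self"] are
# candidates for membership in IM -- morality in the territory after all.
```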

The problem is, our morality doesn’t seem completely responsive to hypothetical and counterfactual shifts in our utility function. That is, *if* I thought causing suffering in others was something I should do, and I got good feelings from doing it, that *wouldn’t* make causing suffering moral (though Sadist Jack might think it was). In other words, changing one’s morality function isn’t a way to change what is moral (perhaps this judgment is uncommon; we should test it).

This does not mean the morality subroutine of your utility function isn’t responsive to changes in other parts of the utility function. If you think fulfilling your own non-moral desires is a moral good, then which actions are moral will depend on how your non-moral desires change. But hypothetical changes in our morality subroutine don’t change our moral judgments about our actions in the hypothetical. This is because when we make moral judgments we *don’t* consult our map of the world to find out what our morality says; rather, we have an emotional reaction to a set of facts, and that emotional reaction generates the moral belief. Below is a diagram that somewhat messily describes what I’m talking about.

On the left we have the external world, which generates the sensory inputs our agent uses to form beliefs. Those beliefs are then input into the utility function, a subroutine of which is morality. The utility function outputs the action the agent chooses. On the right we have zoomed in on the green Map circle from the left. Here we see that the map includes moral ‘beliefs’ (note that this isn’t an ideal map) which have been projected from the morality subroutine in the utility function. Then we have, also within the map, the agent’s self-representation, which in turn includes her algorithms and mental states. Note that altering the morality of the self-representation won’t change the output of the morality subroutine at the first level of the model. Of course, in an ideal map the self-representation would match the first level, but that doesn’t change the causal or phenomenal story of how moral judgments are made.
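
The same point as a toy Python sketch (all names invented): the judgment is produced by the first-level subroutine, so editing the self-representation’s copy of the morality leaves the output untouched:

```python
# Toy sketch: moral judgment is generated by the actual (first-level)
# morality subroutine, not read off the self-representation in the map.
# All names are invented for illustration.

def emotional_reaction(facts):
    # Stand-in for the first-level, amygdala-driven response.
    return "disapproval" if "suffering" in facts else "indifference"

def judge(facts):
    # The reaction is projected onto the map as a moral 'belief'.
    return "impermissible" if emotional_reaction(facts) == "disapproval" else "permissible"

self_representation = {"my morality": "causing suffering is good"}  # edited self-model

print(judge({"two youths", "a cat", "suffering"}))
# -> 'impermissible': the judgment tracks the actual subroutine,
#    not the (edited) description of it in the map.
```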

Observe how easy it is to make category errors if this model is accurate. Since we’re projecting our moral subroutine onto our map, and we’re depicting ourselves in the map, it is very easy to think that morality is something we’re learning about from the external world (if not from sensory input then from a priori reflection!). Of course, morality is in the external world in a meaningful sense, since our brains are in the external world. But learning what is in our brains is not motivating in the way moral judgments are supposed to be. The diagram explains why: the facts about our moral code in our self-representation are not directly connected to the choice circuits which cause us to perform actions. Simply stating what our brains are like will not activate our utility function, and so the expressive content of moral language will be left out. This is Hume’s is-ought distinction: ‘ought’ sentences can’t be derived from ‘is’ sentences because ‘ought’ sentences involve the activation of the utility function at the first level of the diagram, whereas ‘is’ sentences are exclusively part of the map.

And of course, since agents can have different morality functions, there are no universally compelling arguments.

The above is the anti-realist position given in terms I think Less Wrong is comfortable with. It has the following things in its favor: it does not posit any ‘queer’ moral properties as having objective existence, and it fully accounts for the motivating, prescriptive aspect of moral language. It is psychologically sound, albeit simplistic. It is naturalistic while providing an account of why meta-ethics is so confusing to begin with. It explains our naive moral realism and dissolves the difference between the two prominent anti-realist camps.