Decontextualizing Morality

I have a fundamental objection to the way a lot of people do utilitarian morality, which is that their morality is context-dependent; copy-paste somebody from one context to another, and the morality of their lives and choices can become entirely different.

Somebody who lives an ordinary life in a utopia becomes a moral monster if you copy-paste them into a dystopia, even if their subjective experience is the same; in the utopia, they walk by a field of flowers without really thinking about it, whereas in the dystopia, they walk by rivers full of drowning children with the same lack of awareness or concern.

The Copenhagen Interpretation of Ethics discusses this kind of problem in general, but I think most people’s moral intuitions, even after adjusting for this, still see the person who does nothing to improve the world in a utopia as morally neutral, whereas the person who does nothing to improve the world in a dystopia is at least a little morally repugnant. (Question: Do you think somebody who doesn’t save a drowning child is evil?)

In a sense, this captures our intuition that a good person should do something if there is a problem, but that our morality shouldn’t be punitive if there isn’t a problem to fix—the person in the utopia gets a pass because we can’t distinguish between the person who would help, if there were something to help with, and the person who wouldn’t.

However, it seems a bit wonky to me if our metaethics punish somebody for what basically amounts to luck. And if it seems odd that I use the word punish—how can it be punishment to assign moral values to things, when that’s just a value statement?—then I think maybe you don’t experience moral blameworthiness as pain. (I think this might be a central active ingredient in a particular kind of moral conscientiousness.)

But also, I think “feeling like a good person” is something that should at least be available to people who are doing their best, and an ethical system that takes that away from somebody over moral luck is itself at least a little unethical.

Moral Context

Moral context is a very big thing; I’m not going to be able to do it justice here. So I’m going to limit the context I deal with to the environmental context; there are other kinds of context which we may want to preserve. For instance, to return to the question of whether someone who walks by a drowning child is evil, there’s a ready reason to think of them as evil—somebody who does that has demonstrated certain qualities (a lack of regard for others) which we may want to categorize as evil—and this demonstration is context-dependent, in that if you live in a society so full of drowning children that you could spend your whole life doing nothing but saving them, the same lack of regard might be a self-preservation strategy that is harder to fault as evil. I don’t actually have a good argument against this; score one for virtue ethics.

So the argument for a decontextualized morality is not a universal argument; context does in fact matter. But the decontextualization is, I think, particularly useful for considering abstract moral questions, and it is particularly useful when evaluating utilitarianism. That is, it is particularly useful when some of the context has already been stripped away, and when the moral system itself is not highly context-dependent.

I can’t comment too much on deontology; I think it is already nearly completely decontextualized, such that I don’t think the concerns raised here apply.

Maximizing Utility Isn’t The Real Standard

I encountered a criticism of a rationalist which held up the fact that they were spending money on cryogenically freezing their brain, instead of using that money to pay for malaria nets or something similar, as evidence that they didn’t live up to the ethical standard of utility maximization and so shouldn’t be taken seriously. Likewise, somebody else apparently took that reasoning to heart and canceled their cryogenic policy in favor of effective altruism.

I’m sure there’s an argument that cryogenic freezing is actually really utility-maximizing, but I’m equally sure this is, fundamentally, just a rationalization.

My moral intuitions say that what we want is to be able to say that the rationalist with the cryogenic policy is still a good person, but also that the person who canceled their policy is a good person who is extra-good. In particular, my moral intuitions say that we shouldn’t say the cryogenic rationalist has failed at all; instead, what I want is to say that somebody who gives up a chance at immortality has gone above and beyond the call of duty.

That is, I want to be able to say that somebody has succeeded at utilitarianism, even if they haven’t literally maximized utility. And I also want to say that what this other person has done is praiseworthy.

The utilitarian maxim of “Maximize utility” kind of fails at this.

Moral Luck

Contextualized morality has a perverse quality: a given standard of behavior becomes worse the worse the universe you live in. The more miserable your world, the worse the same behavior is judged; the better the world, the less is expected of you. So the more misery you can expect, the more unethical you can expect to be as well, even with nothing changing in your behavior relative to somebody in a less miserable universe.

My intuitions say this isn’t actually how we want ethics to work; in fact, this looks like the kind of thing Evil Uncle Ben would come up with. If this is how your ethics are supposed to work, your metaethics look kind of faulty.

Now, we want the right choice to be saving the child, but we don’t want to rule out self-preservation in the world of drowning children, and we also want the environmental context not to have an undue effect on whether or not someone is a good person—we don’t want to punish people for having the bad moral luck to be born into a world full of misery.

Also, we want the solution to be aesthetic and simple.

An Aesthetic and Simple Solution

The solution in utilitarianism is simple and aesthetic, at least to me.

First, axiomatically, we measure utility against the counterfactual; if an action makes the world neither better nor worse compared to not taking that action, the utility of that action is 0.
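Stated as a very minimal formula (V here is my own notation for a value function over world-states, not anything from the text):

    U(a) = V(world given a) − V(world given inaction)

so that U(a) = 0 whenever taking the action leaves the world exactly as not taking it would have.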

Second, we stop pretending that the ethical maxim of utilitarianism is “Maximize utility”. Instead, we should acknowledge how utilitarianism is actually practiced: maintaining utility is the gold standard that makes you a basically decent person, and increasing utility is the aspirational standard that makes you a good person.

That is, the goal is for every action to be neutral or positive utility. This isn’t a change from how utilitarianism is actually practiced, mind—as far as I can tell, this is basically how most utilitarians actually practice utilitarianism—it’s just acknowledging the truth of the matter.
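To make the difference concrete, here is a minimal sketch (in Python; the action names and utility numbers are invented for illustration, not drawn from anything above) of how the two maxims score the same menu of actions. Under “Maximize utility”, anything short of the single best action is a failure; under the standard described above, any action with non-negative counterfactual utility clears the bar, and positive utility is praiseworthy.

    # Counterfactual utility of each action, measured against doing nothing
    # (which has utility 0 by the axiom above). All values are made up.
    actions = {
        "keep the cryogenic policy": 0.0,  # world neither better nor worse
        "fund malaria nets": 5.0,          # world made better
        "burn the money": -5.0,            # world made worse
    }

    def maximize_verdict(chosen):
        """Under 'Maximize utility', only the single best action succeeds."""
        return "succeeded" if actions[chosen] == max(actions.values()) else "failed"

    def proposed_verdict(chosen):
        """Neutral-or-positive is the standard; positive is extra credit."""
        u = actions[chosen]
        if u > 0:
            return "good (above and beyond)"
        return "decent" if u == 0 else "blameworthy"

    for a in actions:
        print(f"{a}: {maximize_verdict(a)} / {proposed_verdict(a)}")

The only point of the sketch is that keeping the cryogenic policy comes out as a failure under the first verdict and as decent under the second, which matches the intuition about the cryogenic rationalist above.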

Note that moral luck is actually back again, but inverted; a person in a miserable world might actually be the morally lucky person, because it may be easier to do good in that world (make things actually better), just because there’s lots of low-hanging fruit around. Possibly due to loss aversion biases (I find moral luck more problematic when it is taking something away than when it is giving something out), I’m basically okay with this. (But also it seems a lot less perverse.)

The net result is, I think, a version of utilitarianism that is significantly less contextual; copy-pasting people around has a much smaller impact on the moral evaluation we have of them (and that we would want them to have of themselves). What context remains is largely limited to the ways that living the same life can actually have different impacts on different societies; in particular, the evaluation focuses on whether a person makes their society better or worse.

Addendum: Utilitarianism as Decision Theory

There’s a deep counterargument that can be raised against this entire article: that utilitarian morality is about actions and not people, that talking about the “goodness” or “badness” of people is missing the point of what utilitarianism is all about, and that reducing the entire ethical edifice to maximizing your personal score is antithetical to its core principles. But my moral intuitions disagree with at least part of that, and regard moral behavior as fundamentally being about agents, and in particular about how those agents evaluate themselves with respect to their actions both past and present. If keeping score is how you evaluate whether or not you’re a good person, I want you to feel like a good person exactly insomuch as you are a good person, so I want your score-keeping system to be a good one.

That is, I think the utilitarianism described in that counterargument is not a moral theory at all, but rather something more like a decision theory.

It is perhaps a worthy decision theory, but it isn’t what I’m talking about.