Planning a series: discounting utility

I’m planning a series of top-level posts (probably two or three) on when an agent’s utility should not count in utilitarian calculations, which seems to be an interesting and controversial topic given some recent posts. I’m looking for additional ideas, and particularly for counterarguments. I’m also hunting for article titles. The series would look something like the following, noting that this summary obviously doesn’t have much room for nuance or background argument. Throughout, I’m assuming moral antirealism, with utilitarianism selected as the implemented moral system.

Intro - Utilitarianism has serious, fundamental measurement problems, and sometimes substantially contradicts our intuitions. One solution is to say our intuitions are wrong, but under antirealism this isn’t quite right (a morality can’t be “wrong”) unless our intuitions are internally inconsistent, which I do not think is the problem here. Dismissing intuitions is also particularly problematic because agents (especially those with high self-modification capacities) may face socially undesirable incentives, for instance to amplify their own desires until they dominate the calculation. I argue that a better solution is to ignore or discount the utility of certain agents in certain circumstances, which better fits general moral intuitions. (There remains a debate as to whether Morality A might be better than Morality B even when Morality B better matches our general intuitions. I don’t want to get into this, as I’m not sure there’s a non-circular meaning of “better”, as applied to a morality, that does not relate back to moral intuitions.)

1 - First, expressly anti-utilitarian utility can be disregarded. Most cases of this are fairly simple and bright-line. No matter how much Bob enjoys raping people, the utility he derives from doing so is irrelevant, unless he drinks the utilitarian Kool-Aid and only, for example, engages in rape fantasies, in which case his utility is counted: the issue is not that his desire is bad, it’s that his actions are. This raises some slight line-drawing problems with, for example, utility derived from competition, since one may delight in defeating people; that utility probably survives, however, particularly since the competition is consensual.

1.5 - The above point is also related to the issue of discounting the future utility of such persons; I’m trying to figure out whether it belongs in this sequence. The example I plan to use (which makes pretty much the entire point) is as follows. You have some chocolate ice cream you have to give away, either to a small child or to the person who has just brutally beaten and molested that child. The child kinda likes chocolate ice cream; vanilla is his favorite flavor, but chocolate’s OK. The adult absolutely, totally loves chocolate ice cream; it’s his favorite food in the world. I, personally, give the kid the ice cream, and I think well over 90% of the general population would too. On the other hand, if the adult were simply someone who had an interest in molesting children but scrupulously never acted on it, I would not discount his utility so cheerfully. This may belong as a separate post of its own on the utility value of punishment; I’d be interested in feedback on it.
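To make the arithmetic concrete, here is a minimal sketch of the kind of adjusted calculation I have in mind. Every number is invented purely for illustration, and the weight field is my own hypothetical knob (0 for full discounting, values between 0 and 1 for milder versions):

```python
# Toy model: allocating one scarce good under adjusted utilitarianism.
# All utility numbers are invented for illustration only.

def best_recipient(candidates):
    """Pick the candidate whose weighted utility from the good is highest."""
    return max(candidates, key=lambda c: c["utility"] * c["weight"])

child = {"name": "child", "utility": 4, "weight": 1.0}     # chocolate is merely OK
adult = {"name": "attacker", "utility": 9, "weight": 0.0}  # loves it, but discounted

# Unweighted utilitarianism compares 4 against 9 and hands the attacker the
# ice cream; zeroing (or merely shrinking) the attacker's weight flips that.
print(best_recipient([child, adult])["name"])  # -> child
```

The non-acting pedophile at the end of the paragraph would simply keep a weight of 1.0, which is the point of conditioning the discount on conduct rather than desire.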

2 - Finally, and trickiest, is the problem of utility conditioned on false beliefs. Take two examples: an African village stoning a child to death because they think she’s a witch who has made it stop raining, and the same village curing that witch-hood by ritually dunking her in holy water (or by some other innocuous procedure). In the former case, there’s massive disutility that occurs because people think the stoning will solve a problem that it won’t (I’m also a little unclear on what it would mean for the utility of the many to “outweigh” the utility of the one, but that’s an issue I’ll address in the intro article). In the latter, there’s minimal disutility (maybe even positive utility), even though the ritual is equally impotent. The best answer seems to be that utility conditioned on false beliefs should be ignored to the extent that it is conditioned on false beliefs. Many people (myself included) celebrate religious holidays with no belief whatsoever in the underlying religion; there is substantial value simply in the gathering of family and community. Similarly, there is some value to the gathering of the community in both village cases: in the murder it doesn’t outweigh the costs, in the baptism it very well might.
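One way to state this more precisely is to split each party’s utility into a component that survives correcting the false belief and a component that depends on it, then discount only the latter. Below is a toy sketch with invented numbers; setting belief_discount to 0 is the strong version of the claim, and values between 0 and 1 would be softer variants:

```python
# Toy decomposition: utility = belief-independent part + belief-conditioned part.
# All numbers are invented purely to illustrate the structure of the argument.

def adjusted_utility(parties, belief_discount=0.0):
    """Sum utilities, discounting any component conditioned on a false belief."""
    return sum(p["independent"] + belief_discount * p["conditioned"]
               for p in parties)

stoning = [
    {"who": "village", "independent": 5, "conditioned": 20},   # gathering vs. "rain will return"
    {"who": "victim",  "independent": -1000, "conditioned": 0},
]
baptism = [
    {"who": "village", "independent": 5, "conditioned": 20},
    {"who": "girl",    "independent": -2, "conditioned": 0},   # mild indignity, no lasting harm
]

print(adjusted_utility(stoning))  # -> -995.0: negative under any discount
print(adjusted_utility(baptism))  # -> 3.0: the gathering value alone can carry it
```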

3 - (tentative) How this approach converges with the unweighted approach in the long term. Basically, if we ignore certain kinds of utility, we encourage agents to pursue other kinds (if you can’t burn witches to improve your harvest, perhaps you’ll learn how to rotate crops better). The utility they pursue instead is likely to be of only somewhat lower value to them (or of higher value in some cases, if they’re imperfect, i.e. human), but it will be of non-negative value to others. Thus, a policy-maker employing adjusted utilitarianism is likely to obtain better outcomes even as measured from an unweighted perspective. I’m not sure this point is correct or cogent.
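If point 3 survives scrutiny, one way to sanity-check the mechanism is a toy model like the following. The action menu and every payoff are my invention, meant only to show how an adjusted planner can beat an unweighted one on the unweighted planner’s own realized measure, since utility conditioned on a false belief never actually pays out:

```python
# Toy check of point 3: each planner scores actions ex ante, then the world
# pays out only the utility that did not depend on the false belief being true.
# Actions and numbers are invented solely to illustrate the mechanism.

ACTIONS = {
    # genuine: utility the planner can forecast accurately (festival thrill,
    # victim's suffering, better farming). conditioned: utility the believers
    # expect from the false belief (the rain returning), which never arrives.
    "burn_witch":   {"genuine": -95, "conditioned": 200},
    "rotate_crops": {"genuine": 25,  "conditioned": 0},
}

def choose(adjusted):
    """Pick the action with the highest ex-ante score under the given policy."""
    def score(action):
        u = ACTIONS[action]
        return u["genuine"] + (0 if adjusted else u["conditioned"])
    return max(ACTIONS, key=score)

for adjusted in (False, True):
    action = choose(adjusted)
    realized = ACTIONS[action]["genuine"]  # conditioned utility never materializes
    print(f"adjusted={adjusted}: picks {action}, realized utility {realized}")
# adjusted=False: picks burn_witch, realized utility -95
# adjusted=True: picks rotate_crops, realized utility 25
```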

I’m aware that at least some of this runs against LessWrong canon. I’m curious whether people have counterarguments, objections, counterexamples, or general feedback on whether this would be a desirable series to spell out.