Nonparametric Ethics

(Inspired by a recent conversation with Robin Hanson.)

Robin Hanson, in his essay on “Minimal Morality”, suggests that the unreliability of our moral reasoning should lead us to seek simple moral principles:

“In the ordinary practice of fitting a curve to a set of data points, the more noise one expects in the data, the simpler a curve one fits to that data. Similarly, when fitting moral principles to the data of our moral intuitions, the more noise we expect in those intuitions, the simpler a set of principles we should use to fit those intuitions. (This paper elaborates.)”

In “the limit of expecting very large errors of our moral intuitions”, says Robin, we should follow an extremely simple principle—the simplest principle we can find that seems to compress as much morality as possible. And that principle, says Robin, is that it is usually good for people to get what they want, if no one else objects.

Now I myself carry on something of a crusade against trying to compress morality down to One Great Moral Principle. I have developed at some length the thesis that human values are, in actual fact, complex, but that numerous biases lead us to underestimate and overlook this complexity. From a Friendly AI perspective, the word “want” in the English sentence above is a magical category.

But Robin wasn’t making an argument about Friendly AI, but about human ethics: he’s proposing that, in the presence of probable errors in moral reasoning, we should look for principles that seem simple to us to carry out, at the end of the day. The more we distrust ourselves, the simpler the principles.

This argument from fitting noisy data is a kind of logic that can apply even when you have prior reason to believe the underlying generator is in fact complicated. You’ll still get better predictions from the simpler model, because it’s less sensitive to noise.
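To see that logic in a toy setting, here is a minimal sketch in Python (plain numpy); the underlying generator, the noise level, and the sample size are all invented for illustration. With a handful of very noisy points, the two-parameter line typically predicts the held-out truth better than a degree-9 polynomial, even though the generator is not remotely linear.

```python
# Minimal sketch: with few, noisy points, a simple model often predicts better
# than a flexible one, even when the true generator is complicated.
# Generator, noise level, and sample size are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def generator(x):
    # A deliberately complicated "true" curve.
    return np.sin(3 * x) + 0.3 * x ** 2

x_train = rng.uniform(-2, 2, 15)
y_train = generator(x_train) + rng.normal(0, 0.8, x_train.size)  # heavy noise

x_test = np.linspace(-2, 2, 200)
y_true = generator(x_test)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)   # least-squares polynomial fit
    y_pred = np.polyval(coeffs, x_test)
    print(f"degree {degree}: test MSE = {np.mean((y_pred - y_true) ** 2):.3f}")
```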

Even so, my belief that human values are in fact complicated leads me to two objections and an alternative proposal:

The first objection is that we do, in fact, have enough data to support moral models that are more complicated than a small set of short English sentences. If you have a thousand data points, even noisy data points, it may be a waste of evidence to try to fit them to a straight line, especially if you have prior reason to believe the true generator is not linear.

And my second fear is that people underestimate the complexity and error-proneness of the reasoning they do to apply their Simple Moral Principles. If you try to reduce morality to the Four Commandments, then people are going to end up doing elaborate, error-prone rationalizations in the course of shoehorning their real values into the Four Commandments.

But in the ordinary practice of machine learning, there’s a different way to deal with noisy data points besides trying to fit simple models. You can use nonparametric methods. The classic example is k-nearest-neighbors: To predict the value at a new point, use the average value of the k nearest previously observed points (say, the nearest 10).
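In code, the whole method is only a few lines. Here is a minimal one-dimensional sketch in plain numpy; the function name and the data are mine, not from any library.

```python
# Minimal k-nearest-neighbors regression sketch: predict at a new point by
# averaging the k nearest observed values. Names and data are illustrative.
import numpy as np

def knn_predict(x_new, x_obs, y_obs, k=10):
    """Average the y-values of the k observations closest to x_new."""
    distances = np.abs(x_obs - x_new)        # 1-D case: distance is just |x - x_new|
    nearest = np.argsort(distances)[:k]      # indices of the k closest points
    return y_obs[nearest].mean()

# Noisy observations of an unknown curve.
rng = np.random.default_rng(1)
x_obs = rng.uniform(0, 10, 100)
y_obs = np.sin(x_obs) + rng.normal(0, 0.3, x_obs.size)

print(knn_predict(5.0, x_obs, y_obs, k=10))  # roughly sin(5.0), with noise averaged out
```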

A line has two parameters, slope and intercept; to fit a line, we pick the slope and intercept that best match the data. (Minimizing squared error corresponds to maximizing the likelihood of the data given Gaussian noise, for example.) Or we could fit a cubic polynomial, and pick the four parameters that best fit the data.
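For comparison, the parametric fits look like this in the same kind of toy setting: a two-parameter line and a four-parameter cubic, both chosen by least squares (the maximum-likelihood fit under the Gaussian-noise assumption just mentioned). The data are invented.

```python
# Minimal parametric-fit sketch: least squares picks the 2 parameters of a line
# or the 4 parameters of a cubic that best match the (invented) noisy data.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 5, 30)
y = 1.7 * x + 0.5 + rng.normal(0, 1.0, x.size)   # noisy data from a true line

slope, intercept = np.polyfit(x, y, 1)           # 2-parameter fit
cubic_coeffs = np.polyfit(x, y, 3)               # 4-parameter fit

print(f"line:  slope={slope:.2f}, intercept={intercept:.2f}")
print(f"cubic: coefficients={np.round(cubic_coeffs, 2)}")
```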

But the nearest-neighbors estimator doesn’t assume a particular shape of underlying curve—not even that the curve is a polynomial. Technically, it doesn’t even assume continuity. It just says that we think that the true values at nearby positions are likely to be similar. (If we furthermore believe that the underlying curve is likely to have continuous first and second derivatives, but don’t want to assume anything else about the shape of that curve, then we can use cubic splines to fit an arbitrary curve whose first and second derivatives change continuously.)
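As a concrete illustration, here is a sketch using scipy’s UnivariateSpline as one readily available cubic smoothing spline; the noise level and the smoothing factor are illustrative guesses, not recommendations.

```python
# Minimal cubic smoothing spline sketch: piecewise cubic with continuous first
# and second derivatives; the smoothing factor s trades fidelity for smoothness.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 50)
y = np.sin(x) + rng.normal(0, 0.3, x.size)                 # noisy observations

spline = UnivariateSpline(x, y, k=3, s=x.size * 0.3 ** 2)  # cubic, smoothed
print(spline(5.0))                 # roughly sin(5.0)
print(spline.derivative(2)(5.0))   # second derivative exists and varies continuously
```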

And in terms of machine learning, it works. It is done rather less often in science papers—for various reasons, some good, some bad; e.g. academics may prefer models with simple extractable parameters that they can hold up as the triumphant fruits of their investigation: Behold, this is the slope! But if you’re trying to win the Netflix Prize, and you find an algorithm that seems to do well by fitting a line to a thousand data points, then yes, one of the next things you try is substituting some nonparametric estimators of the same data; and yes, this often greatly improves the estimates in practice. (Added: And conversely there are plenty of occasions where ridiculously simple-seeming parametric fits to the same data turn out to yield surprisingly good predictions. And lots of occasions where added complexity for tighter fits buys you very little, or even makes predictions worse. In machine learning this is usually something you find out by playing around, AFAICT.)

It seems to me that concepts like equality before the law, or even the notion of writing down stable laws in the first place, reflect a nonparametric approach to the ethics of error-prone moral reasoning.

We don’t suppose that society can be governed by only four laws. In fact, we don’t even need to suppose that the ‘ideal’ morality (obtained as the limit of perfect knowledge and reflection, etc.) would in fact subject different people and different occasions to the same laws. We need only suppose that we believe, a priori, that similar moral dilemmas are likely ceteris paribus to have similar resolutions, and that moral reasoning about adjustment to specific people is highly error-prone—that, given unlimited flexibility to ‘perfectly fit’ the solution to the person, we’re likely to favor our friends and relatives too much. (And not in an explicit, internally and externally visible way, that we could correct just by having a new rule not to favor friends and relatives.)

So instead of trying to recreate, each time, the judgment that is the perfect fit to the situation and the people, we try to use the ethical equivalent of a cubic spline—have underlying laws that are allowed to be complicated, but have to be written down for stability, and are supposed to treat neighboring points similarly.

Nonparametric ethics says: “Let’s reason about which moral situations are at least rough neighbors so that an acceptable solution to one should be at least mostly-acceptable to another; and let’s reason about where people are likely to be highly biased in their attempt to adjust to specifics; and then, to reduce moral error, let’s enforce similar resolutions across neighboring cases.” If you think that good moral codes will treat different people similarly, and/or that people are highly biased in how they adjust their judgments to different people, then you will come up with the ethical solution of equality before the law.

Now of course you can still have laws that are too complicated, and that try to sneak in too much adaptation to particular situations. This would correspond to a nonparametric estimator that doesn’t smooth enough, like using 1-nearest-neighbor instead of 10-nearest-neighbors, or like a cubic spline that tried to exactly fit every noisy point instead of trading off fidelity against curvature (the usual smoothing-spline penalty on the second derivative).
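In the nearest-neighbors picture, the failure to smooth is easy to see numerically: 1-NN reproduces each noisy observation exactly, while 10-NN averages the noise away. A minimal sketch, with invented data as before:

```python
# Minimal under-smoothing sketch: 1-NN chases every noisy point; 10-NN averages
# the noise away and typically predicts the true curve much better.
import numpy as np

def knn_predict(x_new, x_obs, y_obs, k):
    nearest = np.argsort(np.abs(x_obs - x_new))[:k]   # indices of the k closest points
    return y_obs[nearest].mean()

rng = np.random.default_rng(4)
x_obs = rng.uniform(0, 10, 200)
y_obs = np.sin(x_obs) + rng.normal(0, 0.5, x_obs.size)

x_test = np.linspace(0, 10, 500)
for k in (1, 10):
    preds = np.array([knn_predict(x, x_obs, y_obs, k) for x in x_test])
    mse = np.mean((preds - np.sin(x_test)) ** 2)
    print(f"k={k:2d}: test MSE = {mse:.3f}")
```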

And of course our society may not succeed at similarly treating different people in similar situations—people who can afford lawyers experience a different legal system.

But if nothing else, coming to grips with the concept of nonparametric ethics helps us see the way in which our society is failing to deal with the error-proneness of its own moral reasoning.

You can interpret a fair amount of my coming-of-age as my switch from parametric ethics to nonparametric ethics—from the pre-2000 search for simple underlying morals, and my consequent attempts to reject values that seemed complicated; to my later acceptance that my values were actually going to be complicated, and that both I and my AI designs needed to come to terms with that. Friendly AI can be viewed as the problem of coming up with—not the Three Simple Laws of Robotics that are all a robot needs—but rather a regular and stable method for learning, predicting, and renormalizing human values that are and should be complicated.