Why we need a *theory* of human values

There have been multiple practical suggestions for how to extract the values of a given human. Here are four common classes of such methods:

  • Methods that put high weight on human (bounded) quasi-rationality, or revealed preferences. For example, we can assume that Kasparov was actually trying to win against Deep Blue, not trying desperately to lose while inadvertently playing excellent chess.

  • Methods that pay attention to our explicitly stated values.

  • Methods that use regret, surprise, joy, or similar emotions, to estimate what humans actually want. This could be seen as a form of human TD learning.

  • Methods based on an explicit procedure for constructing the values, such as CEV and Paul’s indirect normativity.

Divergent methods

The first question is why we would expect these methods to point even vaguely in the same direction. They all take very different approaches—why do we think they’re measuring the same thing?

The answer is that they roughly match up in the situations we encounter every day. In such typical situations, people who feel regret are likely to act to avoid that situation in future, to express displeasure about it, and so on.

By analogy, consider a town where there are only two weather events: bright sunny days and snow storms. In that town there is a strong correlation between barometric pressure, wind speed, cloud cover, and temperature. All four indicators track different things, but, in this town, they are basically interchangeable.

But if the weather grows more diverse, this correlation can break down. Rain storms, cloudy days, meteor impacts: all these can disrupt the alignment of the different indicators.
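The analogy can be made concrete with a toy simulation (the weather-to-indicator mappings below are invented purely for illustration): two indicators that are perfectly interchangeable in the two-weather town stop agreeing as soon as a third weather type appears.

```python
def indicators(weather):
    """Map a weather event to (pressure, temperature) readings.
    The values are purely illustrative."""
    return {
        "sunny":     {"pressure": "high", "temp": "high"},
        "snowstorm": {"pressure": "low",  "temp": "low"},
        # A rain storm is warm but low-pressure: the indicators decouple.
        "rainstorm": {"pressure": "low",  "temp": "high"},
    }[weather]

def agreement(events):
    """Fraction of events where pressure and temperature point the same way."""
    readings = [indicators(w) for w in events]
    return sum(r["pressure"] == r["temp"] for r in readings) / len(readings)

narrow_town = ["sunny", "snowstorm"] * 50
diverse_town = ["sunny", "snowstorm", "rainstorm"] * 50

print(agreement(narrow_town))   # 1.0: the indicators are interchangeable
print(agreement(diverse_town))  # 2/3: the correlation has broken down
```

In the narrow town, either indicator is a perfect proxy for the other; nothing in the data warns us that they measure different things.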

Similarly, we expect that an AI could remove us from typical situations and put us into extreme situations—at least “extreme” from the perspective of the everyday world where we forged the intuitions that those methods of extracting values roughly match up. Not only do we expect this, but we desire this: a world without absolute poverty, for example, is the kind of world we would want the AI to move us into, if it could.

In those extreme and unprecedented situations, we could end up with revealed preferences pointing one way, stated preferences another, while regret and CEV point in different directions entirely. In that case, we might be tempted to ask “should we follow regret or stated preferences?” But that would be the wrong question to ask: our methods no longer correlate with each other, let alone with some fundamental measure of human values.

We are thus in an undefined state; in order to continue, we need a meta-method that decides between the different methods. But what criteria could such a meta-method use for deciding (note that simply getting human feedback is not generically an option)? Well, it would have to select the method which best matches up with human values in this extreme situation. To do that, it needs a definition—a theory—of what human values actually are.
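Here is a deliberately minimal sketch of why the meta-method bottoms out in a theory (all method names, numbers, and the stubbed theories are invented assumptions, not a proposed design): the only way the meta-method can rank the extraction methods is by comparing their outputs against some theory's verdict, so its answer is entirely determined by which theory we plug in.

```python
# Hypothetical value-extraction methods, each returning an estimate in
# [-1, 1] of how much the human values the outcome. All names and
# numbers here are invented for illustration.
def revealed_preferences(situation): return situation["acts_toward"]
def stated_preferences(situation):   return situation["says"]
def regret_signal(situation):        return -situation["regret"]

METHODS = {
    "revealed": revealed_preferences,
    "stated": stated_preferences,
    "regret": regret_signal,
}

def choose_method(situation, theory):
    """Meta-method: pick the extraction method whose output best matches
    the theory's verdict. Without a theory of values there is no
    principled criterion to maximise."""
    return max(METHODS,
               key=lambda name: -abs(METHODS[name](situation) - theory(situation)))

# An extreme situation where the methods diverge...
extreme = {"acts_toward": 0.9, "says": -0.5, "regret": 0.8}

# ...so the verdict depends entirely on which (stubbed) theory we plug in.
print(choose_method(extreme, lambda s: -0.4))  # stated
print(choose_method(extreme, lambda s: 0.85))  # revealed
```

Swapping the theory swaps the winning method, which is the point: the meta-method adds nothing unless the theory itself is right.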

Underdefined methods

The previous section understates the problems with purely practical ways of assessing human values. It pointed out divergences between the methods in “extreme situations”. Perhaps we were imagining those extreme situations as the equivalent of a meteor impact on a weather system: bizarre edge cases where reasonable methods finally break down.

But all those methods actually fail in typical situations as well. If we interpret the methods naively, they fail often. For example, in 1919, some of the Chicago White Sox baseball team were actually trying to lose. If we ask someone their stated values in a political debate or a courtroom, we don’t expect an honest answer. Emotion-based approaches fail in situations where humans deliberately expose themselves to nostalgia, fear, or other “negative” emotions (e.g. through scary movies). And there are failure modes for the explicit procedures, too.

This is true if we interpret the methods naively. If we were more “reasonable” or “sophisticated”, we would point out that we don’t expect those methods to be valid in every typical situation. In fact, we can do better than that: we have a good intuitive understanding of when the methods succeed and when they fail, and different people have similar intuitions (we all understand that people are more honest in relaxed private settings than in stressful public ones, for example). It’s as if we lived in a town that had only sunny days or snow storms, except on weekends. Then everyone could agree that the different indicators correlate during the week. So the more sophisticated methods would include something like “ignore the data if it’s Saturday or Sunday”.
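The “sophisticated” version amounts to a context filter. A minimal sketch (the data format and day-tagging are my own illustrative assumptions) makes clear that the filter itself is trivial once someone has already told us which contexts to distrust:

```python
WEEKEND = {"Saturday", "Sunday"}

def trusted_readings(readings):
    """Keep only readings from contexts where the indicators are known
    to correlate. The hard part is not this filter, but writing down
    the predicate it relies on."""
    return [r for r in readings if r["day"] not in WEEKEND]

data = [
    {"day": "Monday",   "pressure": "high"},
    {"day": "Saturday", "pressure": "low"},   # distrusted context
    {"day": "Friday",   "pressure": "high"},
]
print(len(trusted_readings(data)))  # 2: the Saturday reading is ignored
```

All the work is hidden in the `WEEKEND` set: an explicit, agreed-upon list of the contexts where the indicators break.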

But there are problems with this analogy. Unlike for the weather, there is no clear principle for deciding when it’s the equivalent of the weekend. Yes, we have an intuitive grasp of when stated preferences fail, for instance. But as Moravec’s paradox shows, an intuitive understanding doesn’t translate into an explicit, formal definition—and it’s that kind of formal definition that we need if we want to code up those methods. Even worse, we don’t all agree on when the methods fail. For example, some economists deny the very existence of mental illness, while psychiatrists (and most laypeople) very much believe it exists.

Human judgement and machine patching

So figuring out whether the methods apply is an exercise in human judgement. Figuring out whether the methods have gone wrong is a similar exercise (see the Last Judge in CEV). And figuring out what to do when they don’t apply is also an exercise in human judgement—if we judge that someone is lying about their stated preferences, we could just reverse their statement to get their true values.

So we need to patch the methods using our human judgement. And probably patch the patches, and so on. Not only is this patching process a terrible and incomplete way of constructing a safe goal for the AI, but human judgements are not consistent—we can be swayed on questions as basic as whether a behaviour is rational, let alone by all the situational biases that cloud our assessments of more complicated issues.

So obviously, the solution to these problems is to figure out which humans are best in their judgements, and then to see under what circumstances those judgements are least biased, and how to present the information to them in the most impartial way, and then to automate that judgement...

Stop that. It’s silly. The correct solution is not to assess the rationality of human judgements about methods of extracting human values. The correct solution is to come up with a better theoretical definition of what human values are. Armed with such a theory, we can resolve or ignore the above issues in a direct and principled way.

Building a theory of human values

Just because we need a theory of human values doesn’t mean that it’s easy to find one—the universe is cruel like that.

A big part of my current approach is to build such a theory. I will present an overview of my theory in a subsequent post, though most of the pieces have appeared in past posts already. My approach uses three key components:

  1. A way of defining the basic preferences (and basic meta-preferences) of a given human, even if these are under-defined or situational.

  2. A method for synthesising such basic preferences into a single utility function or similar object.

  3. A guarantee that we won’t end up in a terrible place, due to noise or to different choices in the two definitions above.
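To make component 2 slightly more concrete, here is a deliberately minimal sketch (the data structure, numbers, and plain weighted-sum rule are all my own illustrative assumptions, not the actual synthesis method): many partial, possibly conflicting preferences get collapsed into a single utility function over outcomes.

```python
from dataclasses import dataclass

@dataclass
class BasicPreference:
    """A hypothetical partial preference (component 1): an outcome, how
    strongly it is favoured, and how much weight the synthesis gives it."""
    outcome: str
    strength: float  # may be negative
    weight: float

def synthesise(preferences):
    """Component 2, sketched as a plain weighted sum: collapse many
    partial, possibly conflicting preferences into one utility function
    over outcomes. The real synthesis would also have to handle
    meta-preferences and under-defined preferences."""
    utility = {}
    for p in preferences:
        utility[p.outcome] = utility.get(p.outcome, 0.0) + p.weight * p.strength
    return utility

prefs = [
    BasicPreference("eat cake", 1.0, 0.25),
    BasicPreference("eat cake", -0.5, 0.5),   # a conflicting preference
    BasicPreference("stay healthy", 0.75, 1.0),
]
u = synthesise(prefs)
print(u)  # {'eat cake': 0.0, 'stay healthy': 0.75}
```

Even this toy version shows where component 3 bites: the output depends heavily on the weights, so small, arbitrary-looking choices in the synthesis could move us to very different utility functions.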