# Humans can be assigned any values whatsoever...

Humans have no values… nor does any agent. Unless you make strong assumptions about their rationality. And depending on those assumptions, you can get humans to have any values at all.

## An agent with no clear preferences

There are three buttons in this world, B(0), B(1), and X, and one agent **H**.

B(0) and B(1) can be operated by **H**, while X can be operated by an outside observer. **H** will initially press button B(0); if ever X is pressed, the agent will switch to pressing B(1). If X is pressed again, the agent will switch back to pressing B(0), and so on. After a large number of turns N, **H** will shut off. That’s the full algorithm for **H**.
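The algorithm above can be sketched directly. This is a minimal illustrative simulation (names like `run_H` are my own, not from the post): **H** presses B(0) by default, and each press of X toggles which button it presses.

```python
# Minimal sketch of H's full algorithm. H presses B(0) by default;
# each press of X toggles it to the other button.

def run_H(x_presses, N=10):
    """Simulate H for N turns.

    x_presses: set of turn indices at the start of which X is pressed.
    Returns the list of buttons pressed (0 for B(0), 1 for B(1)).
    """
    current = 0            # 0 -> H presses B(0), 1 -> H presses B(1)
    history = []
    for turn in range(N):
        if turn in x_presses:
            current = 1 - current   # pressing X flips which button H presses
        history.append(current)
    return history

# With no X presses, H presses B(0) every turn:
# run_H(set())  -> [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
# Pressing X before turn 3 switches H to B(1) from then on:
# run_H({3})    -> [0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
```

This makes the point concrete: the full behavioural specification of **H** fits in a dozen lines, yet it says nothing by itself about what **H** "values".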

So the question is: what are the values/preferences/rewards of **H**? There are three plausible, natural reward functions:

R(0), which is linear in the number of times B(0) is pressed.

R(1), which is linear in the number of times B(1) is pressed.

R(2) = I(E,X)R(0) + I(O,X)R(1), where I(E,X) is the indicator function for X having been pressed an even number of times, and I(O,X) = 1 − I(E,X) is the indicator function for X having been pressed an odd number of times.
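The three rewards can be sketched in code. In this illustrative version (my notation, not the post's), a history is a list of per-turn pairs: the button **H** pressed and the parity of X presses so far, so the indicator functions in R(2) are applied turn by turn.

```python
# Sketch of the three candidate reward functions. A history is a list of
# (button, x_parity) pairs: the button H pressed that turn (0 or 1) and the
# parity of X presses so far (0 = even, 1 = odd).

def R0(history):
    # linear in the number of times B(0) is pressed
    return sum(1 for button, _ in history if button == 0)

def R1(history):
    # linear in the number of times B(1) is pressed
    return sum(1 for button, _ in history if button == 1)

def R2(history):
    # I(E,X)*R(0) + I(O,X)*R(1), turn by turn: reward for pressing B(0)
    # while X-parity is even, and for pressing B(1) while it is odd.
    return sum(1 for button, parity in history if button == parity)
```

Note that **H**'s actual behaviour always has `button == parity`, so **H** scores maximally on R2 no matter what the observer does with X, while it maximises R0 only on histories where X is never pressed.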

For R(0), we can interpret **H** as an R(0) maximising agent which X overrides. For R(1), we can interpret **H** as an R(1) maximising agent which X releases from constraints. And R(2) is the “**H** is always fully rational” reward. Semantically, these all make sense as a true and natural reward for **H**, with X=”coercive brain surgery” in the first case, X=”release **H** from annoying social obligations” in the second, and X=”switch which of R(0) and R(1) gives you pleasure” in the third.

But note that there are no semantic implications here; all that we know is **H**, with its full algorithm. If we wanted to deduce its true reward for the purpose of something like Inverse Reinforcement Learning (IRL), what would it be?

## Modelling human (ir)rationality and reward

Now let’s talk about the preferences of an actual human. We all know that humans are not always rational (how exactly we know this is a very interesting question that I will be digging into). But even if humans were fully rational, the fact remains that we are physical, and vulnerable to things like coercive brain surgery (and in practice, to a whole host of other more or less manipulative techniques). So there will be the equivalent of “button X” that overrides human preferences. Thus, “not immortal and unchangeable” is in practice enough for the agent to be considered “not fully rational”.

Now assume that we’ve thoroughly observed a given human h (including their internal brain wiring), so we know the human policy π(h) (which determines their actions in all circumstances). This is, in practice, all that we can ever observe—once we know π(h) perfectly, there is nothing more that observing h can teach us (ignore, just for the moment, the question of the internal wiring of h’s brain—that might be able to teach us more, but we’ll need extra assumptions).

Let R be a possible human reward function, and **R** the set of such rewards. A human (ir)rationality *planning algorithm* p (hereafter referred to as a planner) is a map from **R** to the space of policies (thus p(R) says how a human with reward R will actually behave—for example, this could be bounded rationality, rationality with biases, or many other options). Say that the pair (p,R) is compatible if p(R)=π(h). Thus a human with planner p and reward R would behave exactly as h does.

What possible compatible pairs are there? Here are some candidates:

(p(0), R(0)), where p(0) and R(0) are some “plausible” or “acceptable” planners and reward functions (what this means is a big question).

(p(1), R(1)), where p(1) is the “fully rational” planner, and R(1) is a reward that fits to give the required policy.

(p(2), R(2)), where R(2)= -R(1) and p(2)= -p(1), with -p defined as the planner mapping each R to p(-R); here p(2) is the “fully anti-rational” planner.

(p(3), R(3)), where p(3) maps all rewards to π(h), and R(3) is trivial and constant.

(p(4), R(4)), where p(4)= -p(0) and R(4)= -R(0).
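The candidate pairs above can be checked in a toy setting. This is an illustrative construction of my own (finite histories, tabular rewards), not the post's formalism; it verifies that the "fully rational", "fully anti-rational", and "indifferent" pairs all reproduce the same observed policy.

```python
# Toy check that several planner/reward pairs are compatible with the same
# policy. Policies map histories to actions; rewards map (history, action)
# pairs to numbers.

HISTORIES = ["h0", "h1"]
ACTIONS = ["a", "b"]
pi_h = {"h0": "a", "h1": "b"}        # the observed human policy

def p1(R):
    # "fully rational" planner: pick the argmax action at each history
    return {h: max(ACTIONS, key=lambda a: R[(h, a)]) for h in HISTORIES}

def p2(R):
    # "fully anti-rational" planner: p2(R) = p1(-R)
    return p1({k: -v for k, v in R.items()})

def p3(R):
    # maps every reward whatsoever to the observed policy
    return dict(pi_h)

# R1 rewards exactly the actions pi_h takes; R2 = -R1; R3 is trivial.
R1 = {(h, a): (1 if pi_h[h] == a else 0) for h in HISTORIES for a in ACTIONS}
R2 = {k: -v for k, v in R1.items()}
R3 = {(h, a): 0 for h in HISTORIES for a in ACTIONS}

# All three pairs are compatible: each reproduces pi_h exactly.
assert p1(R1) == pi_h
assert p2(R2) == pi_h
assert p3(R3) == pi_h
```

The construction of `R1` here is exactly the one used in the proof below: reward 1 for the action the policy takes, 0 otherwise, which takes only a fixed-length algorithm given π(h).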

## Distinguishing among compatible pairs

How can we distinguish between compatible pairs? At first appearance, we can’t. That’s because, by the definition of compatibility, all pairs produce the correct policy π(h). And once we have π(h), further observations of h tell us nothing.

I initially thought that Kolmogorov or algorithmic complexity might help us here. But in fact:

**Theorem**: The pairs (p(i), R(i)), i ≥ 1, are either simpler than (p(0), R(0)), or differ in Kolmogorov complexity from it by a constant that is independent of (p(0), R(0)).

**Proof**: The cases of i=4 and i=2 are easy, as these differ from i=0 and i=1 by two minus signs. Given (p(0), R(0)), a fixed-length algorithm computes π(h). Then a fixed-length algorithm defines p(3) (by mapping any input to π(h)). Furthermore, given π(h) and any history η, a fixed-length algorithm computes the action a(η) the agent will take; then a fixed-length algorithm defines R(1)(η,a(η))=1 and R(1)(η,b)=0 for b≠a(η).

So the Kolmogorov complexity can shift between p and R (all in R for i=1,2, all in p for i=3), but it seems that the complexity of the *pair* doesn’t go up during these shifts.

This is puzzling. It seems that, in principle, one cannot assume anything about h’s reward at all! R(2)= -R(1), R(4)= -R(0), and p(3) is compatible with any possible reward R. If we give up the assumption of human rationality—which we must—it seems we can’t say anything about the human reward function. So it seems IRL must fail.

Yet, in practice, we can and do say a lot about the rationality and reward/desires of various human beings. We talk about ourselves being irrational, as well as others being so. How do we do this? What structure do we need to assume, and is there a way to get AIs to assume the same?

This is the question I’ll try to partially answer in subsequent posts, using the anchoring bias as a motivating example. The anchoring bias is one of the clearest of all biases; what is it that allows us to say, with such certainty, that it’s a bias (or at least a misfiring heuristic) rather than an odd reward function?


Linearity is an unnecessarily strong assumption here; “monotonically increasing” will do.

I think I followed you 75% to 80% of the way with the math. Would it be fair to say that your main point is that certain combinations of rewards and planners will always produce the same set of actions, and thus you can’t exactly know the way an agent values things?

One thing that I couldn’t tell if you addressed was how many possible compatible pairs of planners and reward functions can exist for an agent. In your third-to-last paragraph, you say that “it seems we can’t say anything about the human reward function”, yet if there is a finite number of compatible pairs, it seems we’ve gained at least some knowledge about what the agent might value.

The planner p(3) is compatible with any reward function, so any reward function R can be valid for the agent. Now, it’s true that the pair (p(3), R) can be quite complex (since p(3) is very complex), but any R is compatible. (And most p’s are also compatible—any p that maps some reward to π(h), technically, and “almost all” p’s are surjective.)


Hmm, if I am to take a guess where you’ll be forced to go, it’s that your assumption about the observability of π(h) won’t hold, since π(h) will be non-deterministic in a way that prevents us computationally from fully “observing” it, i.e. knowing when it halts.

Nope, it’s not that at all. Assuming π(h) is known (at least to the AI) makes the problem easier, not harder.