Inefficient Doesn’t Mean Indifferent


Many people, including Bryan Caplan and Robin Hanson, use the following form of argument a lot. It could be considered the central principle of the (excellent) The Elephant in the Brain. It goes something like:

  1. People say they want X, and they do Y to get it.

  2. If people did C, they would get X, and the price of C is cheap!

  3. Therefore, people really value X at less than the price of C, so they don’t really care much about X.

There’s something very perverse going on here. We’re treating the fact that people try to get X in an inefficient way as evidence that they don’t care about X, rather than as evidence that people aren’t efficient.

The trick is, there are a lot of assumptions hidden in the above logic. In practice, they rarely hold outside of simple cases (e.g. consumption goods).

The motivating example was Bryan Caplan using this one in The Case Against Education:

  1. People say they want smart employees, and look at school records to get them.

  2. If people gave out IQ tests, they would get smart employees, and testing is cheap!

  3. Therefore, people don’t really value smart employees.

In that case, I agree with the conclusion. Employers (most often) don’t want smart employees beyond a threshold requirement. But local validity is vital, and a conclusion that happens to be right doesn’t make this form of argument valid.

There are lots of reasons why one might not want to do C.

As a minimal first step, people have to believe that strategy C would work. A recent example of Robin Hanson using this technique, one that violates that requirement, comes from How Best Help Distant Future? and could be summarized this way:

  1. People say they want to help the future, and lobby for policies they think help.

  2. If people saved money to help the far future, which they almost never do, they could help more, and since you get real returns from it, it’s really cheap!

  3. Therefore, people don’t much care about the far future.

In that case, I strongly disagree. People rightfully do not have faith that saving money now to help the far future will result in the far future being helped. Perhaps it would, but there are a lot of assumptions that case relies upon, many of which most folks disagree with – about when money will have how much impact (especially if you expect a singularity to happen), about what you can expect real returns to be, especially in the worlds that need help most, about whether that money is likely to be confiscated, about whether the money, if not confiscated, would actually get spent in a useful way when the time comes, about what that spending will then crowd out, about whether that savings represents the creation or saving of real resources, about what virtues and habits such actions cultivate, and so forth.

(I don’t think that saving and investing money to spend in the far future is obviously a good or bad way to impact the far future.)
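
To make concrete how much the “real returns make saving cheap” step leans on the assumed rate of return, here is a minimal sketch; the rates and the 100-year horizon are illustrative assumptions of mine, not figures from Hanson’s post:

```python
# Minimal sketch: how the "saving compounds, so it's cheap" claim depends on
# the assumed real rate of return. Rates and horizon are illustrative
# assumptions, not anyone's forecast.

def future_value(principal: float, real_rate: float, years: int) -> float:
    """Value of `principal` after compounding at `real_rate` for `years`."""
    return principal * (1 + real_rate) ** years

for rate in (0.00, 0.02, 0.05):
    print(f"real return {rate:.0%}: $1 saved today -> "
          f"${future_value(1.0, rate, 100):,.2f} in 100 years")

# Roughly: 0% -> $1.00, 2% -> ~$7.24, 5% -> ~$131.50. Whether a dollar saved
# now is "really cheap" help later depends entirely on which of these worlds
# you are in, and on the other assumptions above (confiscation, wise spending,
# crowding out) holding up for a century.
```

The point of the sketch is only that step 2’s “really cheap” is itself a contested quantitative assumption, not a free observation.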

More generally, human actions accomplish, signal and cost many things. A lot of considerations go into our decisions, including seemingly trivial inconveniences. One should never assume that a given option is open to people, or that they know about it, or that they’re confident it would work, or that they’re confident it wouldn’t have hidden costs, or that it doesn’t carry actual large costs you don’t realize, and so forth.

The argument depends on the assumption that humans are maximizing. They’re not. Humans are not automatically strategic. The standard reaction to ‘I actually really, really do want to help the far future’ is not to take exactly those actions that maximize far future impact. The standard reaction to ‘I actually really, really care about hiring the smartest employees’ is not to start giving candidates IQ tests, because that would be mildly socially awkward and carries unknown risks. Because people, to a first approximation, don’t do things, and certainly don’t do things that aren’t typically done.

If something is mildly socially awkward or carries unknown risks, or just isn’t the normal thing to do (and thus, might involve the things above on priors), it probably won’t happen, even if it would get people something they care a lot about.

So if I see you not maximizing far future impact, and accuse you of not caring much about the far future, a reasonable response would be that people don’t actually maximize much of anything. Another would be: I care about many other things too, and I’m helping, so get off your damn high horse.

A very toxic counter-argument to that is to treat all considerations as fungible and translatable to utility or dollars, again assume maximization, and assert this proves you ‘don’t really care’ about X.

An extreme version of this, to (possibly uncharitably, I’m not sure) paraphrase part of a post by Gwern on Denmark:

  1. Denmark helps the people of Greenland via subsidy.

  2. Helping people in Greenland is expensive. Denmark could help many more people if it instead helped other people with that money.

  3. Therefore, Danish people are moral monsters.

This is a general (often implicit, occasionally explicit) argument that seems like a version of the Copenhagen Interpretation of Ethics: If you help anyone anywhere, you are blameworthy, because you could have spent more resources helping, but even more so because you could have spent those resources more effectively. So you’re terrible. You clearly don’t care about helping people – in fact, you are bad and you should feel bad, worse than if you never helped people at all. At least then you wouldn’t be a damned hypocrite.

This threatens to indict everyone for almost every action they take. It is incompatible with civilization, with freedom, and with living life as a human. And it isn’t true. So please, please, stop it.