Not for the Sake of Happiness (Alone)

When I met the futurist Greg Stock some years ago, he argued that the joy of scientific discovery would soon be replaced by pills that could simulate the joy of scientific discovery. I approached him after his talk and said, “I agree that such pills are probably possible, but I wouldn’t voluntarily take them.”

And Stock said, “But they’ll be so much better that the real thing won’t be able to compete. It will just be way more fun for you to take the pills than to do all the actual scientific work.”

And I said, “I agree that’s possible, so I’ll make sure never to take them.”

Stock seemed genuinely surprised by my attitude, which genuinely surprised me.

One often sees ethicists arguing as if all human desires are reducible, in principle, to the desire for ourselves and others to be happy. (In particular, Sam Harris does this in The End of Faith, which I just finished perusing—though Harris’s reduction is more of a drive-by shooting than a major topic of discussion.)

This isn’t the same as asking whether all happinesses can be measured on a common utility scale—different happinesses might occupy different scales, or be otherwise non-convertible. Nor is it the same as arguing that it’s theoretically impossible to value anything other than your own psychological states, since the reduction still lets you care whether other people are happy.

The question, rather, is whether we should care about the things that make us happy, apart from any happiness they bring.

We can easily list many cases of moralists going astray by caring about things besides happiness. The various states and countries that still outlaw oral sex make a good example; these legislators would have been better off if they’d said, “Hey, whatever turns you on.” But this doesn’t show that all values are reducible to happiness; it just argues that in this particular case it was an ethical mistake to focus on anything else.

It is an undeniable fact that we tend to do things that make us happy, but this doesn’t mean we should regard the happiness as the only reason for so acting. First, this would make it difficult to explain how we could care about anyone else’s happiness—how we could treat people as ends in themselves, rather than instrumental means of obtaining a warm glow of satisfaction.

Second, just because something is a consequence of my action doesn’t mean it was the sole justification. If I’m writing a blog post, and I get a headache, I may take an ibuprofen. One of the consequences of my action is that I experience less pain, but this doesn’t mean it was the only consequence, or even the most important reason for my decision. I do value the state of not having a headache. But I can value something for its own sake and also value it as a means to an end.

For all value to be reducible to happiness, it’s not enough to show that happiness is involved in most of our decisions—it’s not even enough to show that happiness is the most important consequence in all of our decisions—it must be the only consequence we care about. That’s a tough standard to meet. (I originally found this point in a Sober and Wilson paper, not sure which one.)

If I claim to value art for its own sake, then would I value art that no one ever saw? A screensaver running in a closed room, producing beautiful pictures that no one would ever see? I’d have to say no. I can’t think of any completely lifeless object that I would value as an end, not just a means. That would be like valuing ice cream as an end in itself, apart from anyone eating it. Everything I value, so far as I can think of, involves people and their experiences somewhere along the line.

The best way I can put it is that my moral intuition appears to require both the objective and the subjective component to grant full value.

The value of scientific discovery requires both a genuine scientific discovery and a person to take joy in that discovery. It may seem difficult to disentangle these two components, but the pills make it clearer.

I would be disturbed if people retreated into holodecks and fell in love with mindless wallpaper. I would be disturbed even if they weren’t aware it was a holodeck, which is an important ethical issue if some agents can potentially transport people into holodecks and substitute zombies for their loved ones without their awareness. Again, the pills make it clearer: I’m not just concerned with my own awareness of the uncomfortable fact. I wouldn’t put myself into a holodeck even if I could take a pill to forget the fact afterward. That’s simply not where I’m trying to steer the future.

I value freedom: When I’m deciding where to steer the future, I take into account not only the subjective states that people end up in, but also whether they got there as a result of their own efforts. The presence or absence of an external puppet master can affect my valuation of an otherwise fixed outcome. Even if people wouldn’t know they were being manipulated, it would matter to my judgment of how well humanity had done with its future. This is an important ethical issue, if you’re dealing with agents powerful enough to helpfully tweak people’s futures without their knowledge.

So my values are not strictly reducible to happiness: There are properties I value about the future that aren’t reducible to activation levels in anyone’s pleasure center; properties that are not strictly reducible to subjective states even in principle.

Which means that my decision system has a lot of terminal values, none of them strictly reducible to anything else. Art, science, love, lust, freedom, friendship...

And I’m okay with that. I value a life complicated enough to be challenging and aesthetic—not just the feeling that life is complicated, but the actual complications—so turning into a pleasure center in a vat doesn’t appeal to me. It would be a waste of humanity’s potential, which I value actually fulfilling, not just having the feeling that it was fulfilled.