Emotional Involvement

Followup to: Evolutionary Psychology, Thou Art Godshatter, Existential Angst Factory

Can your emotions get involved in a video game? Yes, but not much. Whatever sympathetic echo of triumph you experience on destroying the Evil Empire in a video game, it’s probably not remotely close to the feeling of triumph you’d get from saving the world in real life. I’ve played video games powerful enough to bring tears to my eyes, but they still aren’t as powerful as the feeling of significantly helping just one single real human being.

Because when the video game is finished, and you put it away, the events within the game have no long-term consequences.

Maybe if you had a major epiphany while playing… But even then, only your thoughts would matter; the mere fact that you saved the world, inside the game, wouldn’t count toward anything in the continuing story of your life.

Thus fails the Utopia of playing lots of really cool video games forever. Even if the games are difficult, novel, and sensual, this is still the idiom of life chopped up into a series of disconnected episodes with no lasting consequences. A life in which equality of consequences is forcefully ensured, or in which little is at stake because all desires are instantly fulfilled without individual work—these likewise will appear as flawed Utopias of dispassion and angst. “Rich people with nothing to do” syndrome. A life of disconnected episodes and unimportant consequences is a life of weak passions, of emotional uninvolvement.

Our emotions, for all the obvious evolutionary reasons, tend to associate to events that had major reproductive consequences in the ancestral environment, and to invoke the strongest passions for events with the biggest consequences:

Falling in love… birthing a child… finding food when you’re starving… getting wounded… being chased by a tiger… your child being chased by a tiger… finally killing a hated enemy…

Our life stories are not now, and will not be, what they once were.

If one is to be conservative in the short run about changing minds, then we can get at least some mileage from changing the environment. A windowless office filled with highly repetitive non-novel challenges isn’t any more conducive to emotional involvement than video games; it may be part of real life, but it’s a very flat part. The occasional exciting global economic crash that you had no personal control over does not particularly modify this observation.

But we don’t want to go back to the original savanna, the one where you got a leg chewed off and then starved to death once you couldn’t walk. There are things we care about tremendously in the sense of hating them so much that we want to drive their frequency down to zero, not in the most interesting way, just as quickly as possible, by whatever means. If you drive the thing an emotion binds to down to zero, where is the emotion after that?

And there are emotions we might want to think twice about keeping, in the long run. Does racial prejudice accomplish anything worthwhile? I pick this as a target, not because it’s a convenient whipping boy, but because unlike e.g. “boredom” it’s actually pretty hard to think of a reason transhumans would want to keep this neural circuitry around. Readers who take this as a challenge are strongly advised to remember that the point of the question is not to show off how clever and counterintuitive you can be.

But if you lose emotions without replacing them, whether by changing minds, or by changing life stories, then the world gets a little less involving each time; there’s that much less material for passion. And your mind and your life become that much simpler, perhaps, because there are fewer forces at work—maybe even threatening to collapse you into an expected pleasure maximizer. If you don’t replace what is removed.

In the long run, if humankind is to make a new life for itself...

We, and our descendants, will need some new emotions.

This is the aspect of self-modification in which one must above all take care—modifying your goals. Whatever you want, becomes more likely to happen; to ask what we ought to make ourselves want, is to ask what the future should be.

Add emotions at random—bind positive reinforcers or negative reinforcers to random situations and ways the world could be—and you’ll just end up doing what is prime instead of what is good. So adding a bunch of random emotions does not seem like the way to go.

Asking what happens often, and binding happy emotions to that, so as to increase happiness—or asking what seems easy, and binding happy emotions to that—making isolated video games artificially more emotionally involving, for example—

At that point, it seems to me, you’ve pretty much given up on eudaimonia and moved to maximizing happiness; you might as well replace brains with pleasure centers, and civilizations with hedonium plasma.

I’d suggest, rather, that one start with the idea of new major events in a transhuman life, and then bind emotions to those major events and the sub-events that surround them.

What sort of major events might a transhuman life embrace? Well, this is the point at which I usually stop speculating. “Science! They should be excited by science!” is something of a bit-too-obvious and I dare say “nerdy” answer, as is “Math!” or “Money!” (Money is just our civilization’s equivalent of expected utilon balancing anyway.)

Creating a child—as in my favored saying, “If you can’t design an intelligent being from scratch, you’re not old enough to have kids”—is one candidate for a major transhuman life event, and anything you had to do along the way to creating a child would be a candidate for new emotions. This might or might not have anything to do with sex—though I find that thought appealing, being something of a traditionalist. All sorts of interpersonal emotions carry over for as far as my own human eyes can see—the joy of making allies, say; interpersonal emotions get more complex (and challenging) along with the people, which makes them an even richer source of future fun. Falling in love? Well, it’s not as if we’re trying to construct the Future out of anything other than our preferences—so do you want that to carry over?

But again—this is usually the point at which I stop speculating. It’s hard enough to visualize human Eutopias, let alone transhuman ones.

The essential idiom I’m suggesting is something akin to how evolution gave humans lots of local reinforcers for things that in the ancestral environment related to evolution’s overarching goal of inclusive reproductive fitness. Today, office work might be highly relevant to someone’s sustenance, but—even leaving aside the lack of high challenge and complex novelty, and the fact that it isn’t sensually involving because we don’t have native brainware to support the domain—office work is not emotionally involving, because office work wasn’t ancestrally relevant. If office work had been around for millions of years, we’d find it a little less hateful, and experience a little more triumph on filling out a form, one suspects.

Now you might run away shrieking from the dystopia I’ve just depicted—but that’s because you don’t see office work as eudaimonic in the first place, one suspects. And because of the lack of high challenge and complex novelty involved. In an “absolute” sense, office work would seem somewhat less tedious than gathering fruits and eating them.

But the idea isn’t necessarily to have fun doing office work. Just as the idea isn’t necessarily to have your emotions activate for video games instead of real life.

The idea is that once you construct an existence / life story that seems to make sense, then it’s all right to bind emotions to the parts of that story, with strength proportional to their long-term impact. The anomie of today’s world, where we simultaneously (a) engage in office work and (b) lack any passion in it, does not need to carry over: you should either fix one of those problems, or the other.

On a higher, more abstract level, this carries over the idiom of reinforcement over instrumental correlates of terminal values. In principle, this is something that a purer optimization process wouldn’t do. You need neither happiness nor sadness to maximize expected utility. You only need to know which actions result in which consequences, and update that pure probability distribution as you learn through observation; something akin to “reinforcement” falls out of this, but without the risk of losing purposes, without any pleasure or pain. An agent like this is simpler than a human and more powerful—if you think that your emotions give you a supernatural advantage in optimization, you’ve entirely failed to understand the math of this domain. For a pure optimizer, the “advantage” of starting out with one more emotion bound to instrumental events is like being told one more abstract belief about which policies maximize expected utility, except that the belief is very hard to update based on further experience.
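
(As a toy illustration of this idiom, consider the sketch below. It is my own made-up example, not anything canonical: the actions, outcomes, and utility numbers are hypothetical. The point is only that the agent keeps a probability distribution over which outcome each action produces, updates it from observation, and always picks the action with the highest expected utility; nothing in the loop plays the role of pleasure or pain.)

```python
# Minimal sketch of a "pure optimizer": beliefs plus a fixed utility function,
# with no reinforcement signal beyond Bayesian-style count updates.
# All names here are illustrative, not drawn from the post.

from collections import defaultdict

ACTIONS = ["hunt", "gather", "rest"]
OUTCOMES = ["fed", "hungry", "injured"]
UTILITY = {"fed": 1.0, "hungry": -0.5, "injured": -2.0}  # fixed terminal values

# Start with a uniform pseudocount for each (action, outcome) pair.
counts = {a: defaultdict(lambda: 1.0) for a in ACTIONS}

def outcome_distribution(action):
    """Estimate P(outcome | action) from observed counts."""
    total = sum(counts[action][o] for o in OUTCOMES)
    return {o: counts[action][o] / total for o in OUTCOMES}

def expected_utility(action):
    dist = outcome_distribution(action)
    return sum(dist[o] * UTILITY[o] for o in OUTCOMES)

def choose_action():
    """Pick whichever action currently maximizes expected utility."""
    return max(ACTIONS, key=expected_utility)

def observe(action, outcome):
    """Update beliefs from experience; this is all the 'reinforcement' there is."""
    counts[action][outcome] += 1.0

# Usage: the agent acts, the world answers, the agent updates and acts again.
observe("hunt", "injured")
observe("gather", "fed")
print(choose_action(), {a: round(expected_utility(a), 3) for a in ACTIONS})
```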

But it does not seem to me that a mind which has the most value is the same kind of mind that most efficiently optimizes values outside it. The interior of a true expected utility maximizer might be pretty boring, and I even suspect that you can build them to not be sentient.

For as far as my human eyes can see, I don’t know what kind of mind I should value, if that mind lacks pleasure and happiness and emotion in the everyday events of its life. Bearing in mind that we are constructing this Future using our own preferences, not having it handed to us by some inscrutable external author.

If there’s some better way of being (not just doing) that stands somewhere outside this, I have not yet understood it well enough to prefer it. But if so, then all this discussion of emotion would be as moot as it would be for an expected utility maximizer—one which was not valued at all for itself, but only valued for that which it maximized.

It’s just hard to see why we would want to become something like that, bearing in mind that morality is not an inscrutable light handing down awful edicts from somewhere outside us.

At any rate—the hell of a life of disconnected episodes, where your actions don’t connect strongly to anything you strongly care about, and nothing that you do all day invokes any passion—this angst seems avertible, however often it pops up in poorly written Utopias.

Part of The Fun Theory Sequence

Next post: “Serious Stories”

Previous post: “Changing Emotions”