Emotional Involvement

Followup to: Evolutionary Psychology, Thou Art Godshatter, Existential Angst Factory

Can your emotions get involved in a video game? Yes, but not much. Whatever sympathetic echo of triumph you experience on destroying the Evil Empire in a video game, it’s probably not remotely close to the feeling of triumph you’d get from saving the world in real life. I’ve played video games powerful enough to bring tears to my eyes, but they still aren’t as powerful as the feeling of significantly helping just one single real human being.

Because when the video game is finished, and you put it away, the events within the game have no long-term consequences.

Maybe if you had a major epiphany while playing… But even then, only your thoughts would matter; the mere fact that you saved the world, inside the game, wouldn’t count toward anything in the continuing story of your life.

Thus fails the Utopia of playing lots of really cool video games forever. Even if the games are difficult, novel, and sensual, this is still the idiom of life chopped up into a series of disconnected episodes with no lasting consequences. A life in which equality of consequences is forcefully ensured, or in which little is at stake because all desires are instantly fulfilled without individual work—these likewise will appear as flawed Utopias of dispassion and angst. “Rich people with nothing to do” syndrome. A life of disconnected episodes and unimportant consequences is a life of weak passions, of emotional uninvolvement.

Our emotions, for all the obvious evolutionary reasons, tend to associate to events that had major reproductive consequences in the ancestral environment, and to invoke the strongest passions for events with the biggest consequences:

Falling in love… birthing a child… finding food when you’re starving… getting wounded… being chased by a tiger… your child being chased by a tiger… finally killing a hated enemy…

Our life stories are not now, and will not be, what they once were.

If one is to be conservative in the short run about changing minds, then we can get at least some mileage from changing the environment. A windowless office filled with highly repetitive non-novel challenges isn’t any more conducive to emotional involvement than video games; it may be part of real life, but it’s a very flat part. The occasional exciting global economic crash that you had no personal control over does not particularly modify this observation.

But we don’t want to go back to the original savanna, the one where you got a leg chewed off and then starved to death once you couldn’t walk. There are things we care about tremendously in the sense of hating them so much that we want to drive their frequency down to zero, not in the most interesting way, just as quickly as possible, whatever the means. If you drive the thing an emotion binds to down to zero, where is the emotion after that?

And there are emotions we might want to think twice about keeping, in the long run. Does racial prejudice accomplish anything worthwhile? I pick this as a target, not because it’s a convenient whipping boy, but because unlike e.g. “boredom” it’s actually pretty hard to think of a reason transhumans would want to keep this neural circuitry around. Readers who take this as a challenge are strongly advised to remember that the point of the question is not to show off how clever and counterintuitive you can be.

But if you lose emotions without replacing them, whether by changing minds, or by changing life stories, then the world gets a little less involving each time; there’s that much less material for passion. And your mind and your life become that much simpler, perhaps, because there are fewer forces at work—maybe even threatening to collapse you into an expected pleasure maximizer. If you don’t replace what is removed.

In the long run, if humankind is to make a new life for itself…

We, and our descendants, will need some new emotions.

This is the aspect of self-modification in which one must above all take care—modifying your goals. Whatever you want becomes more likely to happen; to ask what we ought to make ourselves want is to ask what the future should be.

Add emotions at random—bind positive reinforcers or negative reinforcers to random situations and ways the world could be—and you’ll just end up doing what is prime instead of what is good. So adding a bunch of random emotions does not seem like the way to go.

Asking what happens often, and binding happy emotions to that, so as to increase happiness—or asking what seems easy, and binding happy emotions to that—making isolated video games artificially more emotionally involving, for example—

At that point, it seems to me, you’ve pretty much given up on eudaimonia and moved to maximizing happiness; you might as well replace brains with pleasure centers, and civilizations with hedonium plasma.

I’d suggest, rather, that one start with the idea of new major events in a transhuman life, and then bind emotions to those major events and the sub-events that surround them. What sort of major events might a transhuman life embrace? Well, this is the point at which I usually stop speculating. “Science! They should be excited by science!” is something of a too-obvious and, I dare say, “nerdy” answer, as is “Math!” or “Money!” (Money is just our civilization’s equivalent of expected utilon balancing anyway.) Creating a child—as in my favored saying, “If you can’t design an intelligent being from scratch, you’re not old enough to have kids”—is one candidate for a major transhuman life event, and anything you had to do along the way to creating a child would be a candidate for new emotions. This might or might not have anything to do with sex—though I find that thought appealing, being something of a traditionalist. All sorts of interpersonal emotions carry over for as far as my own human eyes can see—the joy of making allies, say; interpersonal emotions get more complex (and challenging) along with the people, which makes them an even richer source of future fun. Falling in love? Well, it’s not as if we’re trying to construct the Future out of anything other than our preferences—so do you want that to carry over?

But again—this is usually the point at which I stop speculating. It’s hard enough to visualize human Eutopias, let alone transhuman ones.

The essential idiom I’m suggesting is something akin to how evolution gave humans lots of local reinforcers for things that, in the ancestral environment, related to evolution’s overarching goal of inclusive reproductive fitness. Today, office work might be highly relevant to someone’s sustenance, but—even leaving aside the lack of high challenge and complex novelty, and the fact that it’s not sensually involving because we don’t have native brainware to support the domain—office work is not emotionally involving, because office work wasn’t ancestrally relevant. If office work had been around for millions of years, we’d find it a little less hateful, and experience a little more triumph on filling out a form, one suspects.

Now you might run away shrieking from the dystopia I’ve just depicted—but that’s because you don’t see office work as eudaimonic in the first place, one suspects. And because of the lack of high challenge and complex novelty involved. In an “absolute” sense, office work would seem somewhat less tedious than gathering fruits and eating them.

But the idea isn’t necessarily to have fun doing office work. Just as it’s not necessarily the idea to have your emotions activate for video games instead of real life.

The idea is that once you construct an existence / life story that seems to make sense, then it’s all right to bind emotions to the parts of that story, with strength proportional to their long-term impact. The anomie of today’s world, where we simultaneously (a) engage in office work and (b) lack any passion in it, does not need to carry over: you should either fix one of those problems, or the other.

On a higher, more abstract level, this carries over the idiom of reinforcement over instrumental correlates of terminal values. In principle, this is something that a purer optimization process wouldn’t do. You need neither happiness nor sadness to maximize expected utility. You only need to know which actions result in which consequences, and update that pure probability distribution as you learn through observation; something akin to “reinforcement” falls out of this, but without the risk of losing purposes, without any pleasure or pain. An agent like this is simpler than a human and more powerful—if you think that your emotions give you a supernatural advantage in optimization, you’ve entirely failed to understand the math of this domain. For a pure optimizer, the “advantage” of starting out with one more emotion bound to instrumental events is like being told one more abstract belief about which policies maximize expected utility, except that the belief is very hard to update based on further experience.
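To make the contrast concrete, here is a minimal sketch of such a pure optimizer, in Python, in a toy two-action world. Everything in it (the action names, the utilities, the exploration rate) is invented for illustration and is not from the post. The agent keeps only a probability distribution over which actions lead to which consequences, updates it from observation, and picks whatever maximizes expected utility; its behavior drifts toward the high-payoff action, so something that looks like “reinforcement” falls out, yet nothing in the loop plays the role of pleasure or pain.

```python
import random

# Toy world: each action succeeds with some hidden probability, unknown to the agent.
TRUE_SUCCESS_PROB = {"forage": 0.3, "hunt": 0.7}
UTILITY = {"success": 1.0, "failure": 0.0}  # fixed utilities over outcomes

# Beta(1, 1) prior over each action's success probability, stored as pseudo-counts.
beliefs = {action: {"successes": 1, "failures": 1} for action in TRUE_SUCCESS_PROB}

def expected_utility(action):
    """Expected utility under the current posterior mean: pure belief, no reward signal."""
    b = beliefs[action]
    p_success = b["successes"] / (b["successes"] + b["failures"])
    return p_success * UTILITY["success"] + (1 - p_success) * UTILITY["failure"]

for step in range(1000):
    # Mostly pick the action with the highest expected utility; explore occasionally.
    if random.random() < 0.1:
        action = random.choice(list(TRUE_SUCCESS_PROB))
    else:
        action = max(TRUE_SUCCESS_PROB, key=expected_utility)

    # Observe the consequence and update the probability distribution over consequences.
    succeeded = random.random() < TRUE_SUCCESS_PROB[action]
    beliefs[action]["successes" if succeeded else "failures"] += 1

print({a: round(expected_utility(a), 2) for a in TRUE_SUCCESS_PROB})
```

The only point of the sketch is that the agent’s action frequencies change because its beliefs about consequences change; there is no separate hedonic quantity being maximized anywhere inside it.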

But it does not seem to me that a mind which has the most value is the same kind of mind that most efficiently optimizes values outside it. The interior of a true expected utility maximizer might be pretty boring, and I even suspect that you can build them to not be sentient.

For as far as my human eyes can see, I don’t know what kind of mind I should value, if that mind lacks pleasure and happiness and emotion in the everyday events of its life. Bearing in mind that we are constructing this Future using our own preferences, not having it handed to us by some inscrutable external author.

If there’s some better way of being (not just doing) that stands somewhere outside this, I have not yet understood it well enough to prefer it. But if so, then all this discussion of emotion would be as moot as it would be for an expected utility maximizer—one which was not valued at all for itself, but only valued for that which it maximized.

It’s just hard to see why we would want to become something like that, bearing in mind that morality is not an inscrutable light handing down awful edicts from somewhere outside us.

At any rate—the hell of a life of disconnected episodes, where your actions don’t connect strongly to anything you strongly care about, and nothing that you do all day invokes any passion—this angst seems avertible, however often it pops up in poorly written Utopias.

Part of The Fun Theory Sequence

Next post: “Serious Stories”

Previous post: “Changing Emotions”