Sympathetic Minds

“Mirror neurons” are neurons that are active both when performing an action and observing the same action—for example, a neuron that fires when you hold up a finger or see someone else holding up a finger. Such neurons have been directly recorded in primates, and consistent neuroimaging evidence has been found for humans.

You may recall from my previous writing on “empathic inference” the idea that brains are so complex that the only way to simulate them is by forcing a similar brain to behave similarly. A brain is so complex that if a human tried to understand brains the way that we understand e.g. gravity or a car—observing the whole, observing the parts, building up a theory from scratch—then we would be unable to invent good hypotheses in our mere mortal lifetimes. The only possible way you can hit on an “Aha!” that describes a system as incredibly complex as an Other Mind, is if you happen to run across something amazingly similar to the Other Mind—namely your own brain—which you can actually force to behave similarly and use as a hypothesis, yielding predictions.

So that is what I would call “empathy”.

And then “sympathy” is something else on top of this—to smile when you see someone else smile, to hurt when you see someone else hurt. It goes beyond the realm of prediction into the realm of reinforcement.

And you ask, “Why would callous natural selection do anything that nice?”

It might have gotten started, maybe, with a mother’s love for her children, or a brother’s love for a sibling. You can want them to live, you can want them to be fed, sure; but if you smile when they smile and wince when they wince, that’s a simple urge that leads you to deliver help along a broad avenue, in many walks of life. So long as you’re in the ancestral environment, what your relatives want probably has something to do with your relatives’ reproductive success—this being an explanation for the selection pressure, of course, not a conscious belief.

You may ask, “Why not evolve a more abstract desire to see certain people tagged as ‘relatives’ get what they want, without actually feeling yourself what they feel?” And I would shrug and reply, “Because then there’d have to be a whole definition of ‘wanting’ and so on. Evolution doesn’t take the elaborate correct optimal path, it falls up the fitness landscape like water flowing downhill. The mirroring-architecture was already there, so it was a short step from empathy to sympathy, and it got the job done.”

Relatives—and then reciprocity; your allies in the tribe, those with whom you trade favors. Tit for Tat, or evolution’s elaboration thereof to account for social reputations.
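(For concreteness, here is my own toy sketch of Tit for Tat in Python, not anything evolution literally computes: cooperate on the first round, then copy whatever your partner did last round.)

```python
# Toy sketch of Tit for Tat in the iterated Prisoner's Dilemma.
# The example partner moves below are illustrative only.

COOPERATE, DEFECT = "C", "D"

def tit_for_tat(partner_history):
    """Cooperate on the first round; afterwards, copy the partner's last move."""
    if not partner_history:
        return COOPERATE
    return partner_history[-1]

def play_rounds(n_rounds, strategy, partner_moves):
    """Play n_rounds against a fixed, pre-scripted sequence of partner moves."""
    partner_history, my_moves = [], []
    for i in range(n_rounds):
        my_moves.append(strategy(partner_history))
        partner_history.append(partner_moves[i])
    return my_moves

# A partner who defects once gets punished exactly once, then forgiven:
print(play_rounds(4, tit_for_tat, ["C", "D", "C", "C"]))  # -> ['C', 'C', 'D', 'C']
```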

Who is the most formidable, among the human kind? The strongest? The smartest? More often than either of these, I think, it is the one who can call upon the most friends.

So how do you make lots of friends?

You could, perhaps, have a specific urge to bring your allies food, like a vampire bat—they have a whole system of reciprocal blood donations going in those colonies. But it’s a more general motivation that will lead the organism to store up more favors, if you smile when designated friends smile.

And what kind of organism will avoid making its friends angry at it, in full generality? One that winces when they wince.

Of course you also want to be able to kill designated Enemies without a qualm—these are humans we’re talking about.

But… I’m not sure of this, but it does look to me like sympathy, among humans, is “on” by default. There are cultures that help strangers… and cultures that eat strangers; the question is which of these requires the explicit imperative, and which is the default behavior for humans. I don’t really think I’m being such a crazy idealistic fool when I say that, based on my admittedly limited knowledge of anthropology, it looks like sympathy is on by default.

Either way… it’s painful if you’re a bystander in a war between two sides, and your sympathy has not been switched off for either side, so that you wince when you see a dead child no matter what the caption on the photo; and yet those two sides have no sympathy for each other, and they go on killing.

So that is the human idiom of sympathy—a strange, complex, deep implementation of reciprocity and helping. It tangles minds together—not by a term in the utility function for some other mind’s “desire”, but by the simpler and yet far more consequential path of mirror neurons: feeling what the other mind feels, and seeking similar states. Even if it’s only done by observation and inference, and not by direct transmission of neural information as yet.

Empathy is a human way of predicting other minds. It is not the only possible way.

The human brain is not quickly rewirable; if you’re suddenly put into a dark room, you can’t rewire the visual cortex as auditory cortex, so as to better process sounds, until you leave, and then suddenly shift all the neurons back to being visual cortex again.

An AI, at least one running on anything like a modern programming architecture, can trivially shift computing resources from one thread to another. Put in the dark? Shut down vision and devote all those operations to sound; swap the old program to disk to free up the RAM, then swap the disk back in again when the lights go on.

So why would an AI need to force its own mind into a state similar to what it wanted to predict? Just create a separate mind-instance—maybe with different algorithms, the better to simulate that very dissimilar human. Don’t try to mix up the data with your own mind-state; don’t use mirror neurons. Think of all the risk and mess that implies!

An expected utility maximizer—especially one that does understand intelligence on an abstract level—has other options than empathy, when it comes to understanding other minds. The agent doesn’t need to put itself in anyone else’s shoes; it can just model the other mind directly. A hypothesis like any other hypothesis, just a little bigger. You don’t need to become your shoes to understand your shoes.
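(Here is a toy sketch, in Python, of what “model the other mind directly” could look like: candidate models of the other agent treated as ordinary hypotheses, weighted by Bayes, and used for prediction. The models and numbers are invented for illustration, not a real cognitive architecture.)

```python
# Toy sketch: predict another agent by keeping explicit candidate models of it,
# updating a probability over those models from observed behavior, and
# predicting with the weighted mixture -- no mirroring of one's own mind-state.

CANDIDATE_MODELS = {
    # model name: assumed probability that this kind of agent smiles at you
    "friendly": 0.9,
    "indifferent": 0.5,
    "hostile": 0.1,
}

def update_beliefs(prior, smiles_seen, trials):
    """Bayesian update of P(model) given how often the agent actually smiled."""
    posterior = {}
    for name, p_smile in CANDIDATE_MODELS.items():
        likelihood = (p_smile ** smiles_seen) * ((1 - p_smile) ** (trials - smiles_seen))
        posterior[name] = prior[name] * likelihood
    total = sum(posterior.values())
    return {name: w / total for name, w in posterior.items()}

def predict_next_smile(beliefs):
    """Predict the next observation by averaging the models, weighted by belief."""
    return sum(beliefs[name] * CANDIDATE_MODELS[name] for name in beliefs)

prior = {name: 1 / 3 for name in CANDIDATE_MODELS}
beliefs = update_beliefs(prior, smiles_seen=9, trials=10)
print(predict_next_smile(beliefs))  # ~0.89: the "friendly" hypothesis dominates
```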

And sympathy? Well, suppose we’re dealing with an expected paperclip maximizer, but one that isn’t yet powerful enough to have things all its own way—it has to deal with humans to get its paperclips. So the paperclip agent… models those humans as relevant parts of the environment, models their probable reactions to various stimuli, and does things that will make the humans feel favorable toward it in the future.

To a paperclip maximizer, the humans are just machines with pressable buttons. No need to feel what the other feels—if that were even possible across such a tremendous gap of internal architecture. How could an expected paperclip maximizer “feel happy” when it saw a human smile? “Happiness” is an idiom of policy reinforcement learning, not expected utility maximization. A paperclip maximizer doesn’t feel happy when it makes paperclips, it just chooses whichever action leads to the greatest number of expected paperclips. Though a paperclip maximizer might find it convenient to display a smile when it made paperclips—so as to help manipulate any humans that had designated it a friend.
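(To make the contrast concrete, here is a minimal sketch of the decision rule just described, with invented actions and numbers: score each action by expected paperclips and take the argmax. Note that there is no reward signal anywhere in it to “feel.”)

```python
# Toy sketch of pure expected utility maximization over paperclips.
# Each action maps to a list of (probability, paperclips-produced) outcomes;
# the agent simply picks whichever action maximizes the expectation.

def expected_paperclips(action, world_model):
    """Expected paperclip count if `action` is taken, under the agent's model."""
    return sum(prob * clips for prob, clips in world_model[action])

def choose_action(world_model):
    """No reinforcement, no 'happiness' -- just argmax over expected paperclips."""
    return max(world_model, key=lambda a: expected_paperclips(a, world_model))

world_model = {
    "build_factory":      [(0.5, 1000), (0.5, 0)],  # expected 500
    "smile_at_human":     [(0.9, 600), (0.1, 0)],   # expected 540: the smile is
                                                    # chosen purely as manipulation
    "make_clips_by_hand": [(1.0, 400)],             # expected 400
}

print(choose_action(world_model))  # -> 'smile_at_human'
```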

You might find it a bit difficult to imagine such an algorithm—to put yourself into the shoes of something that does not work like you do, and does not work like any mode your brain can make itself operate in.

You can make your brain operate in the mode of hating an enemy, but that’s not right either. The way to imagine how a truly unsympathetic mind sees a human, is to imagine yourself as a useful machine with levers on it. Not a human-shaped machine, because we have instincts for that. Just a woodsaw or something. Some levers make the machine output coins, other levers might make it fire a bullet. The machine does have a persistent internal state and you have to pull the levers in the right order. Regardless, it’s just a complicated causal system—nothing inherently mental about it.

(To understand unsympathetic optimization processes, I would suggest studying natural selection, which doesn’t bother to anesthetize fatally wounded and dying creatures, even when their pain no longer serves any reproductive purpose, because the anesthetic would serve no reproductive purpose either.)

That’s why I listed “sympathy” in front of even “boredom” on my list of things that would be required to have aliens which are the least bit, if you’ll pardon the phrase, sympathetic. It’s not impossible that sympathy exists among some significant fraction of all evolved alien intelligent species; mirror neurons seem like the sort of thing that, having happened once, could happen again.

Unsympathetic aliens might be trading partners—or not, stars and such resources are pretty much the same the universe over. We might negotiate treaties with them, and they might keep them for calculated fear of reprisal. We might even cooperate in the Prisoner’s Dilemma. But we would never be friends with them. They would never see us as anything but means to an end. They would never shed a tear for us, nor smile for our joys. And the others of their own kind would receive no different consideration, nor have any sense that they were missing something important thereby.

Such aliens would be varelse, not ramen—the sort of aliens we can’t relate to on any personal level, and no point in trying.