The Gift We Give To Tomorrow

How, oh how, did an unloving and mindless universe cough up minds capable of love?

“No mystery in that,” you say, “it’s just a matter of natural selection.”

But natural selection is cruel, bloody, and bloody stupid. Even when, on the surface of things, biological organisms aren’t directly fighting each other—aren’t directly tearing at each other with claws—there’s still a deeper competition going on between the genes. Genetic information is created when genes increase their relative frequency in the next generation—what matters for “genetic fitness” is not how many children you have, but that you have more children than others. It is quite possible for a species to evolve to extinction, if the winning genes are playing negative-sum games.

How, oh how, could such a process create beings capable of love?

“No mystery,” you say, “there is never any mystery-in-the-world; mystery is a property of questions, not answers. A mother’s children share her genes, so the mother loves her children.”

But sometimes mothers adopt children, and still love them. And mothers love their children for themselves, not for their genes.

“No mystery,” you say. “Individual organisms are adaptation-executers, not fitness-maximizers. Evolutionary psychology is not about deliberately maximizing fitness—through most of human history, we didn’t know genes existed. We don’t calculate our acts’ effect on genetic fitness consciously, or even subconsciously.”

But human beings form friendships even with non-relatives: how, oh how, can it be?

“No mystery, for hunter-gatherers often play Iterated Prisoner’s Dilemmas, the solution to which is reciprocal altruism. Sometimes the most dangerous human in the tribe is not the strongest, the prettiest, or even the smartest, but the one who has the most allies.”

Yet not all friends are fair-weather friends; we have a concept of true friendship—and some people have sacrificed their lives for their friends. Would not such a devotion tend to remove itself from the gene pool?

“You said it yourself: we have a concept of true friendship and fair-weather friendship. We can tell, or try to tell, the difference between someone who considers us a valuable ally, and someone executing the friendship adaptation. We wouldn’t be true friends with someone who we didn’t think was a true friend to us—and someone with many true friends is far more formidable than someone with many fair-weather allies.”

And Mohandas Gandhi, who really did turn the other cheek? Those who try to serve all humanity, whether or not all humanity serves them in turn?

“That perhaps is a more complicated story. Human beings are not just social animals. We are political animals who argue linguistically about policy in adaptive tribal contexts. Sometimes the formidable human is not the strongest, but the one who can most skillfully argue that their preferred policies match the preferences of others.”

Um… that doesn’t explain Gandhi, or am I missing something?

“The point is that we have the ability to argue about ‘What should be done?’ as a proposition—we can make those arguments and respond to those arguments, without which politics could not take place.”

Okay, but Gandhi?

“Believed certain complicated propositions about ‘What should be done?’ and did them.”

That sounds like it could explain any possible human behavior.

“If we traced back the chain of causality through all the arguments, it would involve: a moral architecture that had the ability to argue general abstract moral propositions like ‘What should be done to people?’; appeal to hardwired intuitions like fairness, a concept of duty, pain aversion + empathy; something like a preference for simple moral propositions, probably reused from our previous Occam prior; and the end result of all this, plus perhaps memetic selection effects, was ‘You should not hurt people’ in full generality—”

And that gets you Gandhi.

“Unless you think it was magic, it has to fit into the lawful causal development of the universe somehow.”

Well… I certainly won’t postulate magic, under any name.

“Good.”

But come on… doesn’t it seem a little… amazing… that hundreds of millions of years’ worth of evolution’s death tournament could cough up mothers and fathers, sisters and brothers, husbands and wives, steadfast friends and honorable enemies, true altruists and guardians of causes, police officers and loyal defenders, even artists sacrificing themselves for their art, all practicing so many kinds of love? For so many things other than genes? Doing their part to make their world less ugly, something besides a sea of blood and violence and mindless replication?

“Are you claiming to be surprised by this? If so, question your underlying model, for it has led you to be surprised by the true state of affairs. Since the beginning, not one unusual thing has ever happened.”

But how is it not surprising?

“What are you suggesting, that some sort of shadowy figure stood behind the scenes and directed evolution?”

Hell no. But—

“Because if you were suggesting that, I would have to ask how that shadowy figure originally decided that love was a desirable outcome of evolution. I would have to ask where that figure got preferences that included things like love, friendship, loyalty, fairness, honor, romance, and so on. On evolutionary psychology, we can see how that specific outcome came about—how those particular goals rather than others were generated in the first place. You can call it ‘surprising’ all you like. But when you really do understand evolutionary psychology, you can see how parental love and romance and honor, and even true altruism and moral arguments, bear the specific design signature of natural selection in particular adaptive contexts of the hunter-gatherer savanna. So if there was a shadowy figure, it must itself have evolved—and that obviates the whole point of postulating it.”

I’m not postulating a shadowy figure! I’m just asking how human beings ended up so nice.

“Nice! Have you looked at this planet lately? We bear all those other emotions that evolved, too—which would tell you very well that we evolved, should you begin to doubt it. Humans aren’t always nice.”

We’re one hell of a lot nicer than the process that produced us, which lets elephants starve to death when they run out of teeth, and doesn’t anesthetize a gazelle even as it lies dying and is of no further importance to evolution one way or the other. It doesn’t take much to be nicer than evolution. To have the theoretical capacity to make one single gesture of mercy, to feel a single twinge of empathy, is to be nicer than evolution. How did evolution, which is itself so uncaring, create minds on that qualitatively higher moral level than itself? How did evolution, which is so ugly, end up doing anything so beautiful?

“Beautiful, you say? Bach’s Little Fugue in G Minor may be beautiful, but the sound waves, as they travel through the air, are not stamped with tiny tags to specify their beauty. If you wish to find explicitly encoded a measure of the fugue’s beauty, you will have to look at a human brain—nowhere else in the universe will you find it. Not upon the seas or the mountains will you find such judgments written: they are not minds, they cannot think.”

Perhaps that is so, but still I ask: How did evolution end up doing anything so beautiful, as giving us the ability to admire the beauty of a flower?

“Can you not see the circularity in your question? If beauty were like some great light in the sky that shone from outside humans, then your question might make sense—though there would still be the question of how humans came to perceive that light. You evolved with a psychology unlike evolution: Evolution has nothing like the intelligence or the precision required to exactly quine its goal system. In coughing up the first true minds, evolution’s simple fitness criterion shattered into a thousand values. You evolved with a psychology that attaches utility to things which evolution does not care about, like human life and happiness. And then you look back and say, ‘How marvelous, that uncaring evolution produced minds that care about sentient life!’ So your great marvel and wonder, that seems like far too much coincidence, is really no coincidence at all.”

But then it is still amazing that this particular circular loop happened to loop around such important things as beauty and altruism.

“I don’t think you’re following me here. To you, it seems natural to privilege beauty and altruism as special, as preferred, because you value them highly; and you don’t see this as an unusual fact about yourself, because many of your friends do likewise. So you expect that a ghost of perfect emptiness would also value life and happiness—and then, from this standpoint outside reality, a great coincidence would indeed have occurred.”

But you can make arguments for the importance of beauty and altruism from first principles—that our aesthetic senses lead us to create new complexity, instead of repeating the same things over and over; and that altruism is important because it takes us outside ourselves, gives our life a higher meaning than sheer brute selfishness.

“Oh, and that argument is going to move even a ghost of perfect emptiness—now that you’ve appealed to slightly different values? Those aren’t first principles, they’re just different principles. Even if you’ve adopted a high-falutin’ philosophical tone, still there are no universally compelling arguments. All you’ve done is pass the recursive buck.”

You don’t think that, somehow, we evolved to tap into something beyond—

“What good does it do to suppose something beyond? Why should we pay more attention to that beyond thing than we pay to our existence as humans? How does it alter your personal responsibility, to say that you were only following the orders of the beyond thing? And you would still have evolved to let the beyond thing, rather than something else, direct your actions. You are only passing the recursive buck. Above all, it would be too much coincidence.”

Too much coincidence?

“A flower is beautiful, you say. Do you think there is no story behind that beauty, or that science does not know the story? Flower pollen is transmitted by bees, so by sexual selection, flowers evolved to attract bees—by imitating certain mating signs of bees, as it happened; the flowers’ patterns would look more intricate, if you could see in the ultraviolet. Now healthy flowers are a sign of fertile land, likely to bear fruits and other treasures, and probably prey animals as well; so is it any wonder that humans evolved to be attracted to flowers? But for there to be some great light written upon the very stars—those huge unsentient balls of burning hydrogen—which also said that flowers were beautiful, now that would be far too much coincidence.”

So you explain away the beauty of a flower?

“No, I explain it. Of course there’s a story behind the beauty of flowers and the fact that we find them beautiful. Behind ordered events, one finds ordered stories; and what has no story is the product of random noise, which is hardly any better. If you cannot take joy in things that have stories behind them, your life will be empty indeed. I don’t think I take any less joy in a flower than you do; more so, perhaps, because I take joy in its story as well.”

Perhaps, as you say, there is no surprise from a causal viewpoint—no disruption of the physical order of the universe. But it still seems to me that, in this creation of humans by evolution, something happened that is precious and marvelous and wonderful. If we cannot call it a physical miracle, then call it a moral miracle.

“Because it’s only a miracle from the perspective of the morality that was produced, thus explaining away all of the apparent coincidence from a merely causal and physical perspective?”

Well… I suppose you could interpret the term that way, yes. I just meant something that was immensely surprising and wonderful on a moral level, even if it is not surprising on a physical level.

“I think that’s what I said.”

But it still seems to me that you, from your own view, drain something of that wonder away.

“Then you have problems taking joy in the merely real. Love has to begin somehow; it has to enter the universe somewhere. It is like asking how life itself begins—and though you were born of your father and mother, and they arose from their living parents in turn, if you go far and far and far away back, you will finally come to a replicator that arose by pure accident—the border between life and unlife. So too with love.

“A complex pattern must be explained by a cause which is not already that complex pattern. Not just the event must be explained, but the very shape and form. For love to first enter Time, it must come of something that is not love; if this were not possible, then love could not be.

“Even as life itself required that first replicator to come about by accident, parentless but still caused: far, far back in the causal chain that led to you: 3.85 billion years ago, in some little tidal pool.

“Perhaps your children’s children will ask how it is that they are capable of love.

“And their parents will say: Because we, who also love, created you to love.

“And your children’s children will ask: But how is it that you love?

“And their parents will reply: Because our own parents, who also loved, created us to love in turn.

“Then your children’s children will ask: But where did it all begin? Where does the recursion end?

“And their parents will say: Once upon a time, long ago and far away, ever so long ago, there were intelligent beings who were not themselves intelligently designed. Once upon a time, there were lovers created by something that did not love.

“Once upon a time, when all of civilization was a single galaxy and a single star: and a single planet, a place called Earth.

“Long ago, and far away, ever so long ago.”