Evolution, bias and global risk

Sometimes we make a decision in a way which is different to how we think we should make a decision. When this happens, we call it a bias.

When put this way, the first thing that springs to mind is that different people might disagree on whether something is actually a bias. Take the bystander effect. If you're of the opinion that other people are way less important than yourself, then the ability to calmly stand around not doing anything while someone else is in danger would be seen as a good thing. You'd instead be confused by the non-bystander effect, whereby people (when separated from the crowd) irrationally put themselves in danger in order to help complete strangers.

The second thing that springs to mind is that the bias may exist for an evolutionary reason, and not just be due to bad brain architecture. Remember that evolution doesn't always produce the behavior that makes the most intuitive sense. Creatures, presumably including humans, tend to act so as to maximize their reproductive success, not necessarily in the way that seems most sensible to us.

The statement that humans act in a fitness-maximizing way is controversial. Firstly, we are adapted to our ancestral environment, not our current one. It seems very likely that we're not well adapted to the ready availability of high-calorie food, for example. But this argument doesn't apply to everything. A lot of the biases appear to describe situations which would exist in both the ancestral and modern worlds.

A second argument is that a lot of our behavior is governed by memes these days, not genes. It's certain that the memes that survive are the ones which best reproduce themselves; it's also pretty plausible that exposure to memes can tip us from one fitness-maximizing behavioral strategy to another. But memes forcing us to adopt a highly suboptimal strategy? I'm sceptical. It seems like there would be strong selection pressure against it: pressure to pass the memes on without letting them affect our behavior significantly. Memes existed in our ancestral environments too.

And remember that even if you're behaving in a way that maximizes your expected reproductive fitness, there's no reason to expect you to be consciously aware of the fact.

So let's pretend, for the sake of simplicity, that we're all acting to maximize our expected reproductive success (and all the things that we know lead to it, such as status and signalling and stuff). Which of the biases might be explained away?

The bystander effect

Eliezer points out:

We could be cynical and suggest that people are mostly interested in not being blamed for not helping, rather than having any positive desire to help—that they mainly wish to escape antiheroism and possible retribution.

He lists two problems with this hypothesis. Firstly, the experimental setup (a room slowly filling with smoke) appeared to present a selfish threat to the subjects. This I have no convincing answer to. Perhaps people really are just stupid when it comes to fires, not recognising the risk to themselves, or perhaps this is a gaping hole in my theory.

The other criticism is more interesting. Telling people about the bystander effect makes it less likely to happen? Well, under this hypothesis, of course it would. The key to not being blamed is to formulate a plausible explanation; the explanation "I didn't do anything because no-one else did either" suddenly sounds a lot less plausible when you know about the bystander effect. (And if you know about it, the person you're explaining yourself to is more likely to know about it as well. We share memes with our friends.)

The affect heuristic

This one seems quite complicated and subtle, and I think there may be more than one effect going on here. But one class of positive-affect bias can be essentially described as: phrasing an identical decision in more positive language makes people more likely to choose it. The example given is "saving 150 lives" versus "saving 98% of 150 lives". (OK, these aren't quite identical decisions, but the gap in people's preferences is far larger than 2%, and it goes in the wrong direction.) Apparently putting in the "98%" makes it sound more positive to most people.
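To spell out the arithmetic behind that parenthetical (just making explicit what the 2% remark already implies):

$$0.98 \times 150 = 147$$

So the proportion-framed option describes saving three fewer lives, yet it is the framing people apparently rate more favourably.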

This also seems to make sense if we view it as trying to make a justifiable decision, rather than a correct one. Remember, the 150(ish) lives we're saving aren't our own; there's no selective pressure to make the correct decision, just one that won't land us in trouble.

The key here is that justifying decisions is hard, especially when we might be faced with an opponent more skilled in rhetoric than ourselves. So we are eager for additional rhetoric to be supplied which will help us justify the decision we want to make. If I had to justify saving 150 lives (at some cost), it would honestly never have occurred to me to phrase it as "98% of 153 lives". Even if it had, I'd feel like I was being sneaky and manipulative, and I might accidentally reveal that. But having the sneaky rhetoric supplied to me by an outside authority makes it a lot easier.

This implies a prediction: when asked to justify their decision, people who have succumbed to positive-affect bias will repeat the positive-affect language they have been supplied with, possibly verbatim. I'm sure you've met people who quote talking points verbatim from their favorite political TV show; you might assume the TV is doing their thinking for them. I would argue instead that it's doing their justification for them.

Trolley problems

There is a class of people, whom I will call non-pushers, who:

  • would flick a switch if it would cause a train to run over (and kill) one person instead of five, yet

  • would not push a fat man in front of that train (killing him) if it could save the five lives.

So what's going on here? Our feeling of shouldness is presumably how social pressure feels from the inside. What we consider right is (unless we've trained ourselves otherwise) likely to be what will get us into the least trouble. So why do non-pushers get into less trouble than pushers, if pushers are better at saving lives?

It seems pretty obvious to me. The pushers might be more altruistic in some vague sense, but they're not the sort of person you'd want to be around. Stand too close to them on a bridge and they might push you off. Better to steer clear. (The people who are tied to the tracks presumably prefer pushers, but they don't get any choice in the matter.) This might be what we mean by near and far in this context.

Another way of putting it is that if you start valuing all lives equally, and stop putting those closest to you first, then you might start defecting in games of reciprocal altruism. Utilitarians appear cold and unfriendly because they're less worried about you and more worried about what's going on in some distant, impoverished nation. They will start to lose the reproductive benefits of reciprocal altruism and socialising.

Global risk

In Cognitive Biases Potentially Affecting Judgment of Global Risks, Eliezer lists a number of biases which could be responsible for people's underestimation of global risks. There seem to be a lot of them. But I think that from an evolutionary perspective, they can all be wrapped up into one.

Group selection doesn't work. Evolution rewards actions which profit the individual (and its kin) relative to others. Something which benefits the entire group is nice and all that, but it'll increase the frequency of the competitors of your genes as much as it will your own.

It would be all too easy to say that we cannot instinctively understand existential risk because our ancestors have, by definition, never experienced anything like it. But I think that's an over-simplification. Some of our ancestors probably did survive the collapse of societies, but they didn't do it by preventing the society from collapsing. They did it by individually surviving the collapse or by running away.

But if a brave ancestor had saved a society from collapse, wouldn't he (or, to some extent, she) have become an instant hero, with all the reproductive advantage that affords? That would certainly be nice, but I'm not sure the evidence backs it up. Stanislav Petrov was given the cold shoulder. Leading climate scientists are given a rough time, especially when they try to see their beliefs turned into meaningful action. Even Winston Churchill became unpopular after he helped save democratic civilization.

I don't know what the evolutionary reason for hero-indifference would be, but if it's real then it pretty much puts the nail in the coffin for civilization-saving as a reproductive strategy. And that means there's no evolutionary reason to take global risks seriously, or to act on our concerns if we do.

And if we make most of our decisions on instinct—on what feels right—then that's pretty scary.