Anthropomorphic Optimism

The core fallacy of anthropomorphism is expecting something to be predicted by the black box of your brain, when its causal structure is so different from that of a human brain as to give you no license to expect any such thing.

The Tragedy of Group Selectionism (as previously covered in the evolution sequence) was a rather extreme error by a group of early (pre-1966) biologists, including Wynne-Edwards, Allee, and Brereton among others, who believed that predators would voluntarily restrain their breeding to avoid overpopulating their habitat and exhausting the prey population.

The proffered theory was that if there were multiple, geographically separated groups of e.g. foxes, then groups of foxes that best restrained their breeding would send out colonists to replace crashed populations. And so, over time, group selection would promote restrained-breeding genes in foxes.

I’m not going to repeat all the problems that developed with this scenario. Suffice it to say that there was no empirical evidence to start with; that no empirical evidence was ever uncovered; that, in fact, predator populations crash all the time; and that for group selection pressure to overcome a countervailing individual selection pressure turned out to be very nearly mathematically impossible.

The theory having turned out to be completely incorrect, we may ask if, perhaps, the originators of the theory were doing something wrong.

“Why be so uncharitable?” you ask. “In advance of doing the experiment, how could they know that group selection couldn’t overcome individual selection?”

But later on, Michael J. Wade went out and actually created in the laboratory the nigh-impossible conditions for group selection. Wade repeatedly selected insect subpopulations for low population numbers. Did the insects evolve to restrain their breeding, and live in quiet peace with enough food for all, as the group selectionists had envisioned?

No; the adults adapted to cannibalize eggs and larvae, especially female larvae.

Of course selecting for small subpopulation sizes would not select for individuals who restrained their own breeding. It would select for individuals who ate other individuals’ children. Especially the girls.

Now, why might the group selectionists not have thought of that possibility?

Suppose you were a member of a tribe, and you knew that, in the near future, your tribe would be subjected to a resource squeeze. You might propose, as a solution, that no couple have more than one child—after the first child, the couple goes on birth control. Saying, “Let’s all individually have as many children as we can, but then hunt down and cannibalize each other’s children, especially the girls,” would not even occur to you as a possibility.

Think of a preference ordering over solutions, relative to your goals. You want a solution as high in this preference ordering as possible. How do you find one? With a brain, of course! Think of your brain as a high-ranking-solution-generator—a search process that produces solutions that rank high in your innate preference ordering.

The solution space of all real-world problems is generally fairly large, which is why you need an efficient brain that doesn’t even bother to formulate the vast majority of low-ranking solutions.

If your tribe is faced with a resource squeeze, you could try hopping everywhere on one leg, or chewing off your own toes. These “solutions” obviously wouldn’t work and would incur large costs, as you can see upon examination—but in fact your brain is too efficient to waste time considering such poor solutions; it doesn’t generate them in the first place. Your brain, in its search for high-ranking solutions, flies directly to parts of the solution space like “Everyone in the tribe gets together, and agrees to have no more than one child per couple until the resource squeeze is past.”

Such a low-ranking solution as “Everyone have as many kids as possible, then cannibalize the girls” would not be generated in your search process.
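This picture of the brain as a search process that never formulates low-ranking candidates can be sketched as a toy model. Everything here—the candidate plans, the scores, the threshold—is invented for illustration; it is a cartoon of the idea, not a model of cognition.

```python
# Toy sketch: the brain as a high-ranking-solution-generator.
# All plans and scores below are invented for illustration.

def human_preference(solution: str) -> float:
    """Invented scores: how appealing each plan is to a human planner."""
    scores = {
        "agree to one child per couple": 0.9,
        "ration stored food evenly": 0.8,
        "cannibalize each other's daughters": -1.0,
        "hop everywhere on one leg": -0.9,
    }
    return scores[solution]

def solution_generator(candidates, threshold=0.0):
    """Emit only candidates ranking above the threshold, best first.
    Everything below threshold is never 'formulated' at all,
    mimicking a search too efficient to consider poor solutions."""
    good = [s for s in candidates if human_preference(s) > threshold]
    return sorted(good, key=human_preference, reverse=True)

plans = [
    "agree to one child per couple",
    "ration stored food evenly",
    "cannibalize each other's daughters",
    "hop everywhere on one leg",
]
generated = solution_generator(plans)
# The cannibalism 'solution' is simply never produced by this search.
```

The key design point is that the generator does not rank and then discard the bad plans; in this sketch, as in the essay's picture of the brain, they never appear in its output at all.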

But the ranking of an option as “low” or “high” is not an inherent property of the option; it is a property of the optimization process that does the preferring. And different optimization processes will search in different orders.

So far as evolution is concerned, individuals reproducing to the fullest and then cannibalizing others’ daughters is a no-brainer, whereas individuals voluntarily restraining their own breeding for the good of the group is absolutely ludicrous. Or to say it less anthropomorphically, the first set of alleles would rapidly replace the second in a population. (And natural selection has no obvious search order here—these two alternatives seem around equally simple as mutations.)
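The claim that one set of alleles would rapidly replace the other can be made concrete with standard replicator dynamics. The fitness numbers below are assumptions picked for illustration, not measurements; the only point is that any within-group fitness advantage, compounded over generations, sweeps the advantaged allele toward fixation.

```python
# Toy replicator dynamics. Allele C = "cannibalize others' offspring",
# allele R = "restrain own breeding". Fitness values are invented:
# we assume C-carriers leave 1.5x the descendants of R-carriers.

def step(freq_c: float, w_c: float = 1.5, w_r: float = 1.0) -> float:
    """One generation of selection: the new frequency of C is its
    fitness-weighted share of the gene pool."""
    mean_fitness = freq_c * w_c + (1 - freq_c) * w_r
    return freq_c * w_c / mean_fitness

freq_c = 0.01            # C starts as a rare mutation
for _ in range(30):      # thirty generations of selection
    freq_c = step(freq_c)
# C is now nearly fixed in the population, despite looking
# 'ludicrous' to a human planner.
```

After thirty generations, the cannibalism allele exceeds 99% frequency under these assumed fitnesses—illustrating why the countervailing individual selection pressure was so hard for group selection to overcome.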

Suppose that one of the biologists had said, “If a predator population has only finite resources, evolution will craft them to voluntarily restrain their breeding—that’s how I’d do it if I were in charge of building predators.” This would be anthropomorphism outright, the lines of reasoning naked and exposed: I would do it this way, therefore I infer that evolution will do it this way.

One does occasionally encounter the fallacy outright, in my line of work. But suppose you say to the one, “An AI will not necessarily work like you do.” Suppose you say to this hypothetical biologist, “Evolution doesn’t work like you do.” What will the one say in response? I can tell you a reply you will not hear: “Oh my! I didn’t realize that! One of the steps of my inference was invalid; I will throw away the conclusion and start over from scratch.”

No: what you’ll hear instead is a reason why any AI has to reason the same way as the speaker. Or a reason why natural selection, following entirely different criteria of optimization and using entirely different methods of optimization, ought to do the same thing that would occur to a human as a good idea.

Hence the elaborate idea that group selection would favor predator groups where the individuals voluntarily forsook reproductive opportunities.

The group selectionists went just as far astray, in their predictions, as someone committing the fallacy outright. Their final conclusions were the same as if they were assuming outright that evolution necessarily thought like themselves. But they erased what had been written above the bottom line of their argument, without erasing the actual bottom line, and wrote in new rationalizations. Now the fallacious reasoning is disguised; the obviously flawed step in the inference has been hidden—even though the conclusion remains exactly the same; and hence, in the real world, exactly as wrong.

But why would any scientist do this? In the end, the data came out against the group selectionists and they were embarrassed.

As I remarked in Fake Optimization Criteria, we humans seem to have evolved an instinct for arguing that our preferred policy arises from practically any criterion of optimization. Politics was a feature of the ancestral environment; we are descended from those who argued most persuasively that the tribe’s interest—not just their own interest—required that their hated rival Uglak be executed. We certainly aren’t descended from Uglak, who failed to argue that his tribe’s moral code—not just his own obvious self-interest—required his survival.

And because we can argue more persuasively for what we honestly believe, we have evolved an instinct to honestly believe that other people’s goals, and our tribe’s moral code, truly do imply that they should do things our way for their benefit.

So the group selectionists, imagining this beautiful picture of predators restraining their breeding, instinctively rationalized why natural selection ought to do things their way, even according to natural selection’s own purposes. The foxes will be fitter if they restrain their breeding! No, really! They’ll even outbreed other foxes who don’t restrain their breeding! Honestly!

The problem with trying to argue natural selection into doing things your way is that evolution does not contain that which could be moved by your arguments. Evolution does not work like you do—not even to the extent of having any element that could listen to or care about your painstaking explanation of why evolution ought to do things your way. Human arguments are not even commensurate with the internal structure of natural selection as an optimization process—human arguments play no role in promoting alleles, the way human arguments play a causal role in human politics.

So instead of successfully persuading natural selection to do things their way, the group selectionists were simply embarrassed when reality came out differently.

There’s a fairly heavy subtext here about Unfriendly AI.

But the point generalizes: this is the problem with optimistic reasoning in general. What is optimism? It is ranking the possibilities by your own preference ordering, and selecting an outcome high in that preference ordering, and somehow that outcome ends up as your prediction. What kind of elaborate rationalizations were generated along the way is probably not so relevant as one might fondly believe; look at the cognitive history and it’s optimism in, optimism out. But Nature, or whatever other process is under discussion, is not actually, causally choosing between outcomes by ranking them in your preference ordering and picking a high one. So the brain fails to synchronize with the environment, and the prediction fails to match reality.
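The "optimism in, optimism out" failure mode can be compressed into a few lines. Both orderings below are invented stand-ins: the point is only that the optimist predicts by taking the maximum of one ordering while the environment selects by the maximum of a different, incommensurate one.

```python
# Minimal sketch of 'optimism in, optimism out'. Both rankings are
# invented for illustration; what matters is that they differ.

outcomes = ["restrained breeding", "population crash", "egg cannibalism"]

# How the optimistic human ranks the outcomes (assumed values).
human_preference = {"restrained breeding": 2,
                    "egg cannibalism": 1,
                    "population crash": 0}

# Stand-in for natural selection's criterion: relative allele fitness.
reproductive_fitness = {"egg cannibalism": 2,
                        "population crash": 1,
                        "restrained breeding": 0}

prediction = max(outcomes, key=human_preference.get)    # optimism in...
reality = max(outcomes, key=reproductive_fitness.get)   # ...optimism out

# The prediction fails to match reality because the environment never
# consulted the human's preference ordering when 'choosing' an outcome.
```

The predictor's elaborate rationalizations never enter this computation at all—only the two argmax operations do, which is the sense in which the cognitive history is optimism in, optimism out.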