Growing Up is Hard

Terrence Deacon’s The Symbolic Species is the best book I’ve ever read on the evolution of intelligence. Deacon somewhat overreaches when he tries to theorize about what our X-factor is; but his exposition of its evolution is first-class.

Deacon makes an excellent case—he has quite persuaded me—that the increased relative size of our frontal cortex, compared to other hominids, is of overwhelming importance in understanding the evolutionary development of humanity. It’s not just a question of increased computing capacity, like adding extra processors onto a cluster; it’s a question of what kind of signals dominate, in the brain.

People with Williams Syndrome (caused by deletion of a certain region on chromosome 7) are hypersocial, ultra-gregarious; as children they fail to show a normal fear of adult strangers. WSers are cognitively impaired on most dimensions, but their verbal abilities are spared or even exaggerated; they often speak early, with complex sentences and large vocabulary, and excellent verbal recall, even if they can never learn to do basic arithmetic.

Deacon makes a case for some Williams Syndrome symptoms coming from a frontal cortex that is relatively too large for a human, with the result that prefrontal signals—including certain social emotions—dominate more than they should.

“Both postmortem analysis and MRI analysis have revealed brains with a reduction of the entire posterior cerebral cortex, but a sparing of the cerebellum and frontal lobes, and perhaps even an exaggeration of cerebellar size,” says Deacon.

Williams Syndrome’s deficits can be explained by the shrunken posterior cortex—they can’t solve simple problems involving shapes, because the parietal cortex, which handles shape-processing, is diminished. But the frontal cortex is not actually enlarged; it is simply spared. So where do WSers’ augmented verbal abilities come from?

Perhaps because the signals sent out by the frontal cortex, saying “pay attention to this verbal stuff!”, win out over signals coming from the shrunken sections of the brain. So the verbal abilities get lots of exercise—and other abilities don’t.

Similarly with the hyper-gregarious nature of WSers; the signal saying “Pay attention to this person!”, originating in the frontal areas where social processing gets done, dominates the emotional landscape.

And Williams Syndrome is not frontal enlargement, remember; it’s just frontal sparing in an otherwise shrunken brain, which increases the relative force of frontal signals...

...beyond the narrow parameters within which a human brain is adapted to work.

I mention this because you might look at the history of human evolution, and think to yourself, “Hm… to get from a chimpanzee to a human… you enlarge the frontal cortex… so if we enlarge it even further…”

The road to +Human is not that simple.

Hominid brains have been tested billions of times over through thousands of generations. But you shouldn’t reason qualitatively, “Testing creates ‘robustness’, so now the human brain must be ‘extremely robust’.” Sure, we can expect the human brain to be robust against some insults, like the loss of a single neuron. But testing in an evolutionary paradigm only creates robustness over the domain tested. Yes, sometimes you get robustness beyond that, because sometimes evolution finds simple solutions that prove to generalize—

But people do go crazy. Not colloquial crazy, actual crazy. Some ordinary young man in college suddenly decides that everyone around them is staring at them because they’re part of the conspiracy. (I saw that happen once, and made a classic non-Bayesian mistake; I knew that this was archetypal schizophrenic behavior, but I didn’t realize that similar symptoms can arise from many other causes. Psychosis, it turns out, is a general failure mode, “the fever of CNS illnesses”; it can also be caused by drugs, brain tumors, or just sleep deprivation. I saw the perfect fit to what I’d read of schizophrenia, and didn’t ask “What if other things fit just as perfectly?” So my snap diagnosis of schizophrenia turned out to be wrong; but as I wasn’t foolish enough to try to handle the case myself, things turned out all right in the end.)

Wikipedia says that the current main hypotheses being considered for psychosis are (a) too much dopamine in one place, and (b) not enough glutamate somewhere else. (I thought I remembered hearing about serotonin imbalances, but maybe that was something else.)

That’s how robust the human brain is: a gentle little neurotransmitter imbalance—so subtle they’re still having trouble tracking it down after who knows how many fMRI studies—can give you a full-blown case of stark raving mad.

I don’t know how often psychosis happens to hunter-gatherers, so maybe it has something to do with a modern diet? We’re not getting exactly the right ratio of Omega 6 to Omega 3 fats, or we’re eating too much processed sugar, or something. And among the many other things that go haywire with the metabolism as a result, the brain moves into a more fragile state that breaks down more easily...

Or whatever. That’s just a random hypothesis. By which I mean to say: The brain really is adapted to a very narrow range of operating parameters. It doesn’t tolerate a little too much dopamine, just as your metabolism isn’t very robust against non-ancestral ratios of Omega 6 to Omega 3. Yes, sometimes you get bonus robustness in a new domain, when evolution solves W, X, and Y using a compact adaptation that also extends to novel Z. Other times… quite often, really… Z just isn’t covered.

Often, you step outside the box of the ancestral parameter ranges, and things just plain break.
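
To make the “robustness only over the domain tested” point concrete, here is a minimal toy sketch of my own (not anything from the original argument): a mutate-and-select loop tunes the coefficients of a cubic so that it tracks a target curve over the only range it is ever scored on, and the same coefficients then fail badly on a range the selection process never saw. The target function, the domain boundaries, the mutation size, and the iteration count are all arbitrary choices made purely for illustration.

```python
import math
import random

random.seed(0)

def target(x):
    # The "environment" the parameters are being adapted to.
    return math.sin(3 * x)

def model(coeffs, x):
    # A cubic whose coefficients play the role of evolved parameters.
    a, b, c, d = coeffs
    return a + b * x + c * x ** 2 + d * x ** 3

def mean_sq_error(coeffs, xs):
    return sum((model(coeffs, x) - target(x)) ** 2 for x in xs) / len(xs)

tested_domain = [i / 100 for i in range(101)]      # x in [0, 1]: the regime selection actually tests
novel_domain = [2 + i / 100 for i in range(101)]   # x in [2, 3]: outside the "ancestral" box

coeffs = [0.0, 0.0, 0.0, 0.0]
best = mean_sq_error(coeffs, tested_domain)
for _ in range(20000):                             # crude mutate-and-select loop
    mutant = [c + random.gauss(0, 0.05) for c in coeffs]
    score = mean_sq_error(mutant, tested_domain)   # selection only ever "sees" the tested domain
    if score < best:
        coeffs, best = mutant, score

print("error inside the tested domain:  %.4f" % mean_sq_error(coeffs, tested_domain))
print("error outside the tested domain: %.4f" % mean_sq_error(coeffs, novel_domain))
```

On a typical run the in-domain error comes out small while the error on the unseen range is larger by orders of magnitude; nothing in the selection loop ever paid for performance outside the range that was tested.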

Every part of your brain assumes that all the other surrounding parts work a certain way. The present brain is the Environment of Evolutionary Adaptedness for every individual piece of the present brain.

Start modifying the pieces in ways that seem like “good ideas”—making the frontal cortex larger, for example—and you start operating outside the ancestral box of parameter ranges. And then everything goes to hell. Why shouldn’t it? Why would the brain be designed for easy upgradability?

Even if one change works—will the second? Will the third? Will all four changes work well together? Will the fifth change have that much greater a probability of breaking something, because you’re already operating that much further outside the ancestral box? Will the sixth change prove that you exhausted all the brain’s robustness in tolerating the changes you made already, and now there’s no adaptivity left?

Poetry aside, a human being isn’t the seed of a god. We don’t have neat little dials that you can easily tweak to more “advanced” settings. We are not designed for our parts to be upgraded. Our parts are adapted to work exactly as they are, in their current context, every part tested in a regime of the other parts being the way they are. Idiot evolution does not look ahead, it does not design with the intent of different future uses. We are not designed to unfold into something bigger.

Which is not to say that it could never, ever be done.

You could build a modular, cleanly designed AI that could make a billion sequential upgrades to itself using deterministic guarantees of correctness. A Friendly AI programmer could do even more arcane things to make sure the AI knew what you would-want if you understood the possibilities. And then the AI could apply superior intelligence to untangle the pattern of all those neurons (without simulating you in such fine detail as to create a new person), and to foresee the consequences of its acts, and to understand the meaning of those consequences under your values. And the AI could upgrade one thing while simultaneously tweaking the five things that depend on it and the twenty things that depend on them. Finding a gradual, incremental path to greater intelligence (so as not to effectively erase you and replace you with someone else) that didn’t drive you psychotic or give you Williams Syndrome or a hundred other syndromes.

Or you could walk the path of unassisted human enhancement, trying to make changes to yourself without understanding them fully. Sometimes changing yourself the wrong way, and being murdered or suspended to disk, and replaced by an earlier backup. Racing against the clock, trying to raise your intelligence without breaking your brain or mutating your will. Hoping you became sufficiently super-smart that you could improve the skill with which you modified yourself. Before your hacked brain moved so far outside ancestral parameters and tolerated so many insults that its fragility reached a limit, and you fell to pieces with every new attempted modification beyond that. Death is far from the worst risk here. Not every form of madness will appear immediately when you branch yourself for testing—some insanities might incubate for a while before becoming visible. And you might not notice if your goals shifted only a bit at a time, as your emotional balance altered with the strange new harmonies of your brain.

Each path has its little upsides and downsides. (E.g.: AI requires supremely precise knowledge; human upgrading has a nonzero probability of success through trial and error. Malfunctioning AIs mostly kill you and tile the galaxy with smiley faces; human upgrading might produce insane gods to rule over you in Hell forever. Or so my current understanding would predict, anyway; it’s not like I’ve observed any of this as a fact.)

And I’m sorry to dismiss such a gigantic dilemma with three paragraphs, but it wanders from the point of today’s post:

The point of today’s post is that growing up—or even deciding what you want to be when you grow up—is around as hard as designing a new intelligent species. Harder, since you’re constrained to start from the base of an existing design. There is no natural path laid out to godhood, no Level attribute that you can neatly increment and watch everything else fall into place. It is an adult problem.

Being a transhumanist means wanting certain things—judging them to be good. It doesn’t mean you think those goals are easy to achieve.

Just as there’s a wide range of understanding among people who talk about, say, quantum mechanics, there’s also a certain range of competence among transhumanists. There are transhumanists who fall into the trap of the affect heuristic, who see the potential benefit of a technology, and therefore feel really good about that technology, so that it also seems that the technology (a) has readily managed downsides, (b) is easy to implement well, and (c) will arrive relatively soon.

But only the most formidable adherents of an idea are any sign of its strength. Ten thousand New Agers babbling nonsense do not cast the least shadow on real quantum mechanics. And among the more formidable transhumanists, it is not at all rare to find someone who wants something and thinks it will not be easy to get.

One is much more likely to find, say, Nick Bostrom—that is, Dr. Nick Bostrom, Director of the Oxford Future of Humanity Institute and founding Chair of the World Transhumanist Association—arguing that a possible test for whether a cognitive enhancement is likely to have downsides is the ease with which it could have occurred as a natural mutation—since if it had only upsides and could easily occur as a natural mutation, why hasn’t the brain already adapted accordingly? This is one reason to be wary of, say, cholinergic memory enhancers: if they have no downsides, why doesn’t the brain produce more acetylcholine already? Maybe you’re using up a limited memory capacity, or forgetting something else...

And that may or may not turn out to be a good heuristic. But the point is that the serious, smart, technically minded transhumanists do not always expect that the road to everything they want is easy. (Where you want to be wary of people who say, “But I dutifully acknowledge that there are obstacles!” but stay in basically the same mindset of never truly doubting the victory.)

So you’ll forgive me if I am somewhat annoyed with people who run around saying, “I’d like to be a hundred times as smart!” as if it were as simple as scaling up a hundred times instead of requiring a whole new cognitive architecture; and as if a change of that magnitude in one shot wouldn’t amount to erasure and replacement. Or asking, “Hey, why not just augment humans instead of building AI?” as if it wouldn’t be a desperate race against madness.

I’m not against being smarter. I’m not against augmenting humans. I am still a transhumanist; I still judge that these are good goals.

But it’s really not that simple, okay?

Part of The Fun Theory Sequence

Next post: “Changing Emotions”

Previous post: “Failed Utopia #4-2”