Growing Up is Hard

Terrence Deacon’s The Symbolic Species is the best book I’ve ever read on the evolution of intelligence. Deacon somewhat overreaches when he tries to theorize about what our X-factor is; but his exposition of its evolution is first-class.

Deacon makes an excellent case—he has quite persuaded me—that the increased relative size of our frontal cortex, compared to other hominids, is of overwhelming importance in understanding the evolutionary development of humanity. It’s not just a question of increased computing capacity, like adding extra processors onto a cluster; it’s a question of what kind of signals dominate, in the brain.

People with Williams Syndrome (caused by deletion of a certain region on chromosome 7) are hypersocial, ultra-gregarious; as children they fail to show a normal fear of adult strangers. WSers are cognitively impaired on most dimensions, but their verbal abilities are spared or even exaggerated; they often speak early, with complex sentences and large vocabulary, and excellent verbal recall, even if they can never learn to do basic arithmetic.

Deacon makes a case for some Williams Syndrome symptoms coming from a frontal cortex that is relatively too large for a human, with the result that prefrontal signals—including certain social emotions—dominate more than they should.

“Both postmortem analysis and MRI analysis have revealed brains with a reduction of the entire posterior cerebral cortex, but a sparing of the cerebellum and frontal lobes, and perhaps even an exaggeration of cerebellar size,” says Deacon.

Williams Syndrome’s deficits can be explained by the shrunken posterior cortex—they can’t solve simple problems involving shapes, because the parietal cortex, which handles shape-processing, is diminished. But the frontal cortex is not actually enlarged; it is simply spared. So where do WSers’ augmented verbal abilities come from?

Perhaps because the signals sent out by the frontal cortex, saying “pay attention to this verbal stuff!”, win out over signals coming from the shrunken sections of the brain. So the verbal abilities get lots of exercise—and other abilities don’t.

Similarly with the hyper-gregarious nature of WSers; the signal saying “Pay attention to this person!”, originating in the frontal areas where social processing gets done, dominates the emotional landscape.

And Williams Syndrome is not frontal enlargement, remember; it’s just frontal sparing in an otherwise shrunken brain, which increases the relative force of frontal signals...

...beyond the narrow parameters within which a human brain is adapted to work.

I mention this because you might look at the history of human evolution, and think to yourself, “Hm… to get from a chimpanzee to a human… you enlarge the frontal cortex… so if we enlarge it even further…”

The road to +Human is not that simple.

Hominid brains have been tested billions of times over through thousands of generations. But you shouldn’t reason qualitatively, “Testing creates ‘robustness’, so now the human brain must be ‘extremely robust’.” Sure, we can expect the human brain to be robust against some insults, like the loss of a single neuron. But testing in an evolutionary paradigm only creates robustness over the domain tested. Yes, sometimes you get robustness beyond that, because sometimes evolution finds simple solutions that prove to generalize—

But people do go crazy. Not colloquial crazy, actual crazy. Some ordinary young man in college suddenly decides that everyone around him is staring at him because they’re part of the conspiracy. (I saw that happen once, and made a classic non-Bayesian mistake; I knew that this was archetypal schizophrenic behavior, but I didn’t realize that similar symptoms can arise from many other causes. Psychosis, it turns out, is a general failure mode, “the fever of CNS illnesses”; it can also be caused by drugs, brain tumors, or just sleep deprivation. I saw the perfect fit to what I’d read of schizophrenia, and didn’t ask “What if other things fit just as perfectly?” So my snap diagnosis of schizophrenia turned out to be wrong; but as I wasn’t foolish enough to try to handle the case myself, things turned out all right in the end.)

Wikipedia says that the current main hypotheses being considered for psychosis are (a) too much dopamine in one place, and (b) not enough glutamate somewhere else. (I thought I remembered hearing about serotonin imbalances, but maybe that was something else.)

That’s how robust the human brain is: a gentle little neurotransmitter imbalance—so subtle they’re still having trouble tracking it down after who knows how many fMRI studies—can give you a full-blown case of stark raving mad.

I don’t know how often psychosis happens to hunter-gatherers; maybe it has something to do with a modern diet? We’re not getting exactly the right ratio of Omega 6 to Omega 3 fats, or we’re eating too much processed sugar, or something. And among the many other things that go haywire with the metabolism as a result, the brain moves into a more fragile state that breaks down more easily...

Or whatever. That’s just a random hypothesis. By which I mean to say: The brain really is adapted to a very narrow range of operating parameters. It doesn’t tolerate a little too much dopamine, just as your metabolism isn’t very robust against non-ancestral ratios of Omega 6 to Omega 3. Yes, sometimes you get bonus robustness in a new domain, when evolution solves W, X, and Y using a compact adaptation that also extends to novel Z. Other times… quite often, really… Z just isn’t covered.

Often, you step outside the box of the ancestral parameter ranges, and things just plain break.

Every part of your brain assumes that all the other surrounding parts work a certain way. The present brain is the Environment of Evolutionary Adaptedness for every individual piece of the present brain.

Start modifying the pieces in ways that seem like “good ideas”—making the frontal cortex larger, for example—and you start operating outside the ancestral box of parameter ranges. And then everything goes to hell. Why shouldn’t it? Why would the brain be designed for easy upgradability?

Even if one change works—will the second? Will the third? Will all four changes work well together? Will the fifth change have that much greater a probability of breaking something, because you’re already operating that much further outside the ancestral box? Will the sixth change prove that you exhausted all the brain’s robustness in tolerating the changes you made already, and now there’s no adaptivity left?
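To make the compounding concrete, here is a toy sketch (every number is invented for illustration; nothing here is measured from real brains): suppose each successive modification is attempted a bit further outside the ancestral box, so it is a bit more likely than the last to break something. The probability that everything still works falls off quickly.

```python
# Toy model, not a claim about real brains: all numbers are made up.
# Change k fails with probability base_failure * (1 + escalation)**(k - 1),
# i.e. each change is riskier than the last because you're operating
# further outside the ancestral parameter box.

def p_nothing_broke(n_changes, base_failure=0.05, escalation=0.5):
    """Probability that all n_changes sequential modifications succeed."""
    p_ok = 1.0
    for k in range(1, n_changes + 1):
        p_fail_k = min(1.0, base_failure * (1 + escalation) ** (k - 1))
        p_ok *= 1.0 - p_fail_k
    return p_ok

for n in range(1, 9):
    print(f"after {n} change(s): P(nothing broke) = {p_nothing_broke(n):.2f}")
```

With a 5% chance of breakage on the first change, and each later change half again as risky as the one before, eight changes leave you with roughly a 2% chance that nothing has broken yet. The particular numbers mean nothing; the shape of the curve is the point.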

Poetry aside, a human being isn’t the seed of a god. We don’t have neat little dials that you can easily tweak to more “advanced” settings. We are not designed for our parts to be upgraded. Our parts are adapted to work exactly as they are, in their current context, every part tested in a regime of the other parts being the way they are. Idiot evolution does not look ahead, it does not design with the intent of different future uses. We are not designed to unfold into something bigger.

Which is not to say that it could never, ever be done.

You could build a modular, cleanly designed AI that could make a billion sequential upgrades to itself using deterministic guarantees of correctness. A Friendly AI programmer could do even more arcane things to make sure the AI knew what you would-want if you understood the possibilities. And then the AI could apply superior intelligence to untangle the pattern of all those neurons (without simulating you in such fine detail as to create a new person), and to foresee the consequences of its acts, and to understand the meaning of those consequences under your values. And the AI could upgrade one thing while simultaneously tweaking the five things that depend on it and the twenty things that depend on them. Finding a gradual, incremental path to greater intelligence (so as not to effectively erase you and replace you with someone else) that didn’t drive you psychotic or give you Williams Syndrome or a hundred other syndromes.

Or you could walk the path of unassisted human enhancement, trying to make changes to yourself without understanding them fully. Sometimes changing yourself the wrong way, and being murdered or suspended to disk, and replaced by an earlier backup. Racing against the clock, trying to raise your intelligence without breaking your brain or mutating your will. Hoping you became sufficiently super-smart that you could improve the skill with which you modified yourself. Before your hacked brain moved so far outside ancestral parameters and tolerated so many insults that its fragility reached a limit, and you fell to pieces with every new attempted modification beyond that. Death is far from the worst risk here. Not every form of madness would appear immediately when you branched yourself for testing—some insanities might incubate for a while before they became visible. And you might not notice if your goals shifted only a bit at a time, as your emotional balance altered with the strange new harmonies of your brain.

Each path has its little upsides and downsides. (E.g.: AI requires supremely precise knowledge; human upgrading has a nonzero probability of success through trial and error. Malfunctioning AIs mostly kill you and tile the galaxy with smiley faces; human upgrading might produce insane gods to rule over you in Hell forever. Or so my current understanding would predict, anyway; it’s not like I’ve observed any of this as a fact.)

And I’m sorry to dismiss such a gigantic dilemma with three paragraphs, but it wanders from the point of today’s post:

The point of today’s post is that growing up—or even deciding what you want to be when you grow up—is around as hard as designing a new intelligent species. Harder, since you’re constrained to start from the base of an existing design. There is no natural path laid out to godhood, no Level attribute that you can neatly increment and watch everything else fall into place. It is an adult problem.

Being a transhumanist means wanting certain things—judging them to be good. It doesn’t mean you think those goals are easy to achieve.

Just as there’s a wide range of understanding among people who talk about, say, quantum mechanics, there’s also a certain range of competence among transhumanists. There are transhumanists who fall into the trap of the affect heuristic, who see the potential benefit of a technology, and therefore feel really good about that technology, so that it also seems that the technology (a) has readily managed downsides, (b) is easy to implement well, and (c) will arrive relatively soon.

But only the most formidable adherents of an idea are any sign of its strength. Ten thousand New Agers babbling nonsense do not cast the least shadow on real quantum mechanics. And among the more formidable transhumanists, it is not at all rare to find someone who wants something and thinks it will not be easy to get.

One is much more likely to find, say, Nick Bostrom—that is, Dr. Nick Bostrom, Director of the Oxford Future of Humanity Institute and founding Chair of the World Transhumanist Association—arguing that a possible test for whether a cognitive enhancement is likely to have downsides is the ease with which it could have occurred as a natural mutation—since if it had only upsides and could easily occur as a natural mutation, why hasn’t the brain already adapted accordingly? This is one reason to be wary of, say, cholinergic memory enhancers: if they have no downsides, why doesn’t the brain produce more acetylcholine already? Maybe you’re using up a limited memory capacity, or forgetting something else...

And that may or may not turn out to be a good heuristic. But the point is that the serious, smart, technically minded transhumanists do not always expect that the road to everything they want is easy. (Where you want to be wary of people who say, “But I dutifully acknowledge that there are obstacles!” but stay in basically the same mindset of never truly doubting the victory.)

So you’ll forgive me if I am somewhat annoyed with people who run around saying, “I’d like to be a hundred times as smart!” as if it were as simple as scaling up a hundred times instead of requiring a whole new cognitive architecture; and as if a change of that magnitude in one shot wouldn’t amount to erasure and replacement. Or asking, “Hey, why not just augment humans instead of building AI?” as if it wouldn’t be a desperate race against madness.

I’m not against being smarter. I’m not against augmenting humans. I am still a transhumanist; I still judge that these are good goals.

But it’s really not that simple, okay?

Part of The Fun Theory Sequence

Next post: “Changing Emotions”

Previous post: “Failed Utopia #4-2”