If brains are computers, what kind of computers are they? (Dennett transcript)

In 2013, Daniel Dennett gave this talk at an Oxford conference, and it was uploaded to FHI’s YouTube channel. I remember this being one of my favourite talks by Dennett. I rewatched it yesterday, and found it had a surprising amount to say about subjects we’ve discussed lately on LessWrong, pertaining to cultural evolution, embedded agency, and design principles in biology.

I’ve made a transcript and edited it together below, with some section titles and images and light editing for clarity. (If Dan or someone else would like this edited/removed, I’ll of course do that, I just thought it was really interesting and wanted to discuss it on LessWrong.) Dennett later turned these ideas into a book and gave more talks, but they were all directed at a less technical audience than LessWrong, so I found I enjoyed this talk more.

Here is the transcript.


Thanks.

I want to apologize for not attending the afternoon sessions. I was back in my room trying to digest and accommodate and incorporate a few little items from what I’ve been hearing today. I figured that would happen, and sure enough it did. So you will probably see some little fossil traces of things you said or heard in earlier talks.

Thanks to the organizers for putting this together and for inviting me. It’s been as informative as I expected and hoped it would be and I certainly enjoyed the whole thing.

Competence Without Comprehension

We’ve been saying “Yeah, of course the brain is a computer!” for a long time, and we run into a stone wall of disbelief and fear and loathing in many quarters. I think you have to stop and ask why that is. Why is this such a repellent idea to so many people? Of course, one reason might be that it’s false, that brains really aren’t computers. But that’s not the new wrinkle that I’m going to be putting forward today. I do hope there are some new bits, though.

Some of you will have heard parts of this before or seen parts of this before, I dare say, but rest assured, there’s some stuff you haven’t seen before. So don’t just do your email or something. You might be surprised.

Think of all the people who say “Brains aren’t computers”. They’re not idiots. There are some good scientists among them: Roger Penrose insists on this, Gerry Edelman insists on this, Jaak Panksepp. These are industrial-strength scientists, and they all dislike the idea that brains are computers. I think in every case you want to say: well, they’re not the sort of computer you say they’re not. Yeah, you’re right about that. But that’s not all there is to computers.

Then there’s philosophers, of course, my dear friend John Searle and Raymond Tallis in this country, the poor man’s John Searle. I wasn’t going to be sarcastic. I’m sorry, I apologize. I probably won’t keep to my resolution anyway. So it’s not as if people have just conceded that we’re right, that brains are computers. So let’s look at it more closely.

Well, if brains aren’t computers, what are they? Well, they’re not pumps. They’re not factories. They’re not purifiers. They take information in and give control out.

Of course, they’re computers.

That’s almost as good as a very latitudinarian definition of a computer: they’re information-processing systems, organs. They’re just not the kind of computers that critics are imagining.

Now, why the computer-phobia in the first place? Here I want to return to a theme that’s been in my work for a long time because I think there’s a very deep seated reason for that fear and it’s related very directly to the fear that many people have, especially in my country with evolution.

Here’s my favorite outraged quote from Robert Beverly Mackenzie.

In the theory with which we have to deal, Absolute Ignorance is the artificer; so that we may enunciate as the fundamental principle of the whole system, that, IN ORDER TO MAKE A PERFECT AND BEAUTIFUL MACHINE, IT IS NOT REQUISITE TO KNOW HOW TO MAKE IT. This proposition will be found, on careful examination, to express, in condensed form, the essential purport of the Theory, and to express in a few words all Mr. Darwin’s meaning; who, by a strange inversion of reasoning, seems to think Absolute Ignorance fully qualified to take the place of Absolute Wisdom in all the achievements of creative skill.

Bingo. He’s got it. That’s right. That is what Darwin is saying and it is a strange inversion of reasoning.

Now, let’s look at my other great hero, Alan Turing. Alan, as he was called, my friend Alan, in that clip this morning. Here’s Turing’s strange inversion of reasoning. Before Turing, these were computers: most of them wore dresses; they had degrees in mathematics. Very often they were people. That was a job description. And what Turing realized is that, although in the old days computers had to understand arithmetic, had to appreciate the reasons for what they were doing (this was a fairly high-tech job), this was not necessary.

Now let’s put the two together. Here’s Darwin’s strange inversion of reasoning. (By the way, those caps were in the original; Mackenzie was in high dudgeon when he wrote that.) Now we’re going to vary it.

In order to be a perfect and beautiful computing machine, it is not requisite to know what arithmetic is.

Very similar idea. What they unite in showing, and it is a strange inversion of reasoning, is competence without comprehension. This is my bumper sticker for both ideas.

In the case of evolution, the process of natural selection itself is competent; it has no comprehension, doesn’t need comprehension. It brilliantly produces all sorts of marvels of design: competence without any comprehension, no mind at all. Turing launders all the comprehension out of the process of arithmetic, makes an automatic, clueless arithmetic-doer, and then shows how you can build up and build up and build up from there.

In other words, in both cases, mind, consciousness, and understanding are an effect, not a cause. It’s not the beginning cause; it’s in fact a rather recent effect. So if Darwin explains the intelligent creator, or explains away the intelligent creator, Turing explains the God-like mind. Or explains away the God-like mind.

I think right there is the main source of the antipathy. People want God-like minds. They want their minds to be flipping mysterious. They want them to be a little miracle between the ears. Any engineering-type theory of what the mind is is offensive to them. That means that they are motivated to caricature the idea of computers in the first place. They don’t want to think well of computers because they need to keep the very idea at bay.

I like to say: yeah, of course we’re robots made of robots made of robots. This is a hateful image to many people, and so they mis-imagine computation so that they can go on confidently denying that they could be any such thing. There’s a whole tradition in the 20th century of eminent people demonstrating that strong AI is impossible, whether it’s Penrose or Searle or others. People lap up those arguments, and if you try to show them what’s wrong with the arguments, they get very impatient with you. They like the conclusion so well they don’t want to be nitpicked over the details. They love the fact that an eminent Berkeley professor has shown that there’s no such possibility as strong AI, or that a great mathematical physicist has shown this. Don’t bother me with the details; I love the conclusion.

Now this mis-imagination is not hard to understand. What they do is compare the algorithms with which they’re familiar, things like Word and Photoshop and email and Google Desktop, with brains, and they say “Brains just aren’t like that!” Cold, orderly, ultra-efficient, rigid, composed of units that are mindless little switches.

But they are right. Brains aren’t like commercial software, the kind that they use. Brains aren’t made of silicon. They’re not von Neumann architectures. But there are other kinds of computers. That’s what I really want to talk about.

I want to propose a particular difference. It’s not just that they’re protein versus silicon. That’s important, I guess, but not that important. It’s not just that they’re analog versus digital. First of all, they’re not entirely analog, but in any case, that’s not the big difference. It’s not just that they’re parallel rather than serial. These are all well-examined differences. The one that I want to look at, less examined, although more examined every day, is this: competitive rather than cooperative.

Competitive Rather Than Cooperative

The default image that most people have of computation is that it’s ultra-efficient. There’s no waste motion, no cross-purposes, redundancy only for safety. It’s hierarchically organized: routines call subroutines, which have to answer, and there’s controlled prioritization.

And there’s a very deep reason for this: money, time. You want your software to run as fast as it can on whatever hardware you’ve got available, so you make it ultra-efficient; you don’t have it lollygagging around. You strip it down, make it as efficient, as bureaucratically neat and clean and effective as you can make it.

Now, there are opponent processes, but they’re what I’m going to call friendly opponent processes. They are politely orchestrated. There’s a master of ceremonies in the control system that invites the participants to do their battle and then declares the winner and ushers everybody off. It’s very friendly and polite. And I’m talking about a more competitive architecture.
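[Note: the following sketch is not from the talk. It’s a minimal, hypothetical illustration of the contrast Dennett is drawing: a “polite” opponent process, where a master of ceremonies invites the candidates to compete and declares a winner, versus a free-for-all, where agents persist only as long as they keep winning resources. All names and parameters are invented.]

```python
def polite_opponent_process(candidates, evidence):
    """The MC scores each invited routine and ushers the winner on."""
    scores = {name: scorer(evidence) for name, scorer in candidates.items()}
    return max(scores, key=scores.get)

def free_for_all(agents, evidence, rounds=100):
    """No MC: agents bid with their energy; winners are paid, losers starve."""
    energy = {name: 1.0 for name in agents}
    for _ in range(rounds):
        bids = {n: energy[n] * agents[n](evidence) for n in energy}
        winner = max(bids, key=bids.get)
        energy = {n: e * 0.95 for n, e in energy.items()}       # metabolic cost
        energy[winner] += 0.2                                   # captured reward
        energy = {n: e for n, e in energy.items() if e > 0.05}  # starvation
    return energy

agents = {"grasp": lambda e: e, "flee": lambda e: 1 - e}
print(free_for_all(agents, evidence=0.8))  # "flee" starves out; "grasp" thrives
```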

Compare these models, though, with, let’s say, the stock market. Now, here’s a question: is this a computational phenomenon? Blood, sweat, tears, bodies screaming.

Yeah, it’s a computational phenomenon. Information in, information out. Shares get sold, and it’s a very abstract thing that’s going on there in the stock market. Right now we do it with not terribly attractive human beings who earn their living wresting every last penny out of every last share. But that’s an important job to do, and they even have some responsibility to be, as it were, a buyer of last resort. So that’s the way we do it now. But it’s clear that the people can be removed completely. In fact, that’s happening at a great pace already. A completely computerized stock exchange, I don’t think one yet exists, but we’re on the verge of it all the time. The people are being eased out of the process.

I’m reminded of Bill Woods’s great old natural language processor, back in the old GOFAI days of natural language processing. He had the classic GOFAI boxology: the acoustic signal came in, and the first thing that had to happen was that there had to be a spectrogram made. And then you need to have the spectrogram analyzed to turn it into a conjecture [about] what phonemes were there. And when they first built the system, they didn’t know how to do that. So they hired a phonologist to sit there looking at the spectrograms and [make] guesses about what the phonemes were. He couldn’t hear it through headphones; he had to look at the spectrogram and make guesses. And he was a pretty good guesser. But of course, it was part of a generate-and-test loop.

They had a homunculus in the box, they had a real Homo sapiens in the box, until they got the rest of the system built. Then they went back and built the hypothesis generator, the phonology hypothesis generator. Basically, I’m saying the homunculi are about to be discharged from the stock market, and so it will be, very clearly, a computational process. So yeah, it is a computational process, and it’s much more like what I am supposing the brain is.
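[Note: a minimal sketch, not from the talk, of the generate-and-test loop just described: a generator (originally the human phonologist) proposes phoneme strings for a spectrogram, and the rest of the pipeline serves as the test. All names and data are invented placeholders.]

```python
def generate_and_test(spectrogram, generate_hypotheses, parse):
    """Return the first phoneme hypothesis that the rest of the system accepts."""
    for phonemes in generate_hypotheses(spectrogram):  # the guesser's role
        sentence = parse(phonemes)                     # downstream analysis
        if sentence is not None:                       # the test
            return sentence
    return None

# Toy stand-ins for the guesser and the downstream parser:
guesser = lambda spec: iter([["k", "a", "p"], ["k", "a", "t"]])
parser = lambda ph: "cat" if ph == ["k", "a", "t"] else None
print(generate_and_test("spectrogram", guesser, parser))  # -> "cat"
```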

I want you to compare Marx. Remember Marx: “To each according to his needs, from each according to his talents.” Wonderful idea. Notice that that’s a pretty good model for your average computer. Subroutines never have to worry whether they’re going to get enough cycles to do their work, or whether they’re going to get enough electricity to stay up and powered. The central committee will take care of everything. They just do their stuff. Happy slaves: they just dutifully do their stuff. They don’t have any problems. That’s the ideal Marxist structure.

Compare that with dog-eat-dog, laissez-faire, free-for-all capitalism. And I’m saying the brain is more like the latter. There’s no central or higher control. Cooperation is an intermittent achievement, not a precondition.

Now, I realize at this point, especially coming from America, that I have to do a little preemptive rooting out of an irony. Milton Friedman, Ayn Rand: am I extolling them? Are they my heroes? No, no, not at all. I’m quite far left as far as American politics goes. But still, centrally planned economies don’t work, and neither do centrally coordinated, top-down brains. Even the best cognitive architectures that I know of over the years have been too disciplined, too polite, too bureaucratic. And of course they are made up of millions of identical elements, and that’s profoundly unlike the brain. I had a really eye-opening conversation with Rod Brooks about this a few months ago. He has a hobby: he’s building what you might call a steampunk computer, a pre-electronic computer. It’s got solenoids and switches. He’s building it just as a stunt. It fills a room about as big as this auditorium, and it’s going to run probably, what, 10 orders of magnitude slower than your desktop, and have a memory that would be just invisible in today’s computers. But he’s building it. It’s his project.

He says the hardest thing is making all the registers and the larger gates alike. The tolerance, just the engineering tolerance, is difficult to achieve. And if they aren’t all really alike, the thing is not going to run well. We just take that for granted in our chips these days. But it’s a feature of the hardware without which the computers that we’re using all the time would not work. And it’s a feature of the hardware that is not to be found in the brain.

You’ve got 100 billion, 100 billion neurons, and 10 times more astrocytes. And the astrocytes, we’re now beginning to learn, to my dismay in a way, are doing some work. They’re doing some information-processing work. So the brain has become more and more complicated. And they’re all different; no two are alike. Nobody knows, I think, how to build a basic architecture, a CPU for a computer, where the fundamental elements are 100 billion individualistic neurons. You have to think about just what it would take to make that work.

This idealization has been around ever since McCulloch and Pitts, the very dawn of cognitive science. On the left, we see a sketch, an old sketch.

[Note: This is not the image that Dennett used in the talk, which was unavailable.]

I deliberately went out and got an old picture from the fifties, of a neuron. Pretty accurate. We haven’t changed our mind too much about neurons, although they’ve got a lot more complicated.

On the right, you see the McCulloch-Pitts logical neuron.

When I first learned about McCulloch-Pitts logical neurons, in about 1963, I was entranced. I thought: oh, this is so cool. Multiple inputs, either positive or negative; a threshold mechanism; a little simple adding device that sees whether or not the excitation exceeds the inhibition by the threshold amount; and then a single branching output. Cool. You could build some really interesting networks out of those and really make things happen. That’s got to be what a neuron is.

And of course, that was just exactly what McCulloch and Pitts said, and all the classic work which led to many models [and] was the basis, with some improvements, for connectionism.

All of these use these logical neurons as good idealizations. And I’m suggesting now that… it was very useful, very useful at the time. But it’s one of those idealizations that has outlived its usefulness, and we have to throw it away.
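[Note: for readers who haven’t seen one, here is a minimal sketch, not from the talk, of the McCulloch-Pitts idealization just described: multiple weighted inputs, excitatory or inhibitory, a simple adding device, a threshold, and a single all-or-nothing output.]

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (1) iff excitation exceeds inhibition by the threshold amount."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Wired as a two-input AND gate: both inputs must be on to reach threshold.
assert mcculloch_pitts([1, 1], [1, 1], threshold=2) == 1
assert mcculloch_pitts([1, 0], [1, 1], threshold=2) == 0
# A negative weight makes an input inhibitory: a NOT gate.
assert mcculloch_pitts([1], [-1], threshold=0) == 0
assert mcculloch_pitts([0], [-1], threshold=0) == 1
```

Networks of these units can implement any Boolean function, which is part of why the idealization was so seductive.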

Just to drive home the point: I don’t know if you’ve seen these wonderful time-lapse micrographs, not animations, but actual videos, microscope videos, of neurons in a petri dish seeking out connections. Groping around, little exploratory dendritic spines going out, moving along. Now, that’s what neurons are like.

I was going to show, you’ve probably all seen it, the BioVisions animation, the wonderful animation of what goes on inside a cell, with the motor proteins and all. It’s absolutely stunning. Google “BioVisions”. I have it right here on my laptop, but for some reason it’s incompatible with the projector, so I couldn’t show it to you.

BioVisions… you will see the breadth and complexity of what goes on inside every cell and every neuron. That’s where the robots are, down among the motor proteins. The individual neurons are exploratory little devils, and they’re sending out processes and then reabsorbing them. Something’s controlling that. This is, if not goal-directed, at least goal-guided behavior. They’re trying to do something. What they’re trying to do is network: they’re trying to improve their influence by making more connections with other neurons.

You have to remember that every neuron is a descendant of eukaryotes that survived on their own for over a billion years. Single-celled organisms. This is my favorite picture of the tree of life, Leonard Eisenberg’s. It’s really very pleasing.

So here’s the beginning of life and… was it bacteria or archaea, it’s not quite clear. But then we had the great eukaryotic revolution and everything else that you can see, those are all eukaryotes. Basically, to a first approximation, any life form that’s visible to the naked eye is a eukaryote, a multicellular eukaryote.

Neurons are just direct descendants of some of those unicellular eukaryotes that fended for themselves once they’d descended from the prokaryotes, and they had a long time to fend for themselves, and they succeeded. Then their descendants got, not so much trapped, as enlisted in multicellular organisms, where of course they’ve changed quite a bit.

But maybe they haven’t changed all that much. Maybe the right way to think of them is as little, relatively stupid, but still competent agents, trapped, imprisoned in a large collection of other such agents inside a multicellular body. My post-docs jokingly call this theme in our thinking “brain wars”. That’s how undisciplined we think it is: real, not notional, competition; micro-agents with their own agendas.

Now, among the people who’ve had ideas like this: Tecumseh Fitch, in his paper “Nano-intentionality” in Biology and Philosophy. On the surface it’s a criticism of my line on intentionality and original intentionality, but the better reading, the more fruitful reading, is that it’s a look at that issue, and a useful one indeed. Eric Baum: how many of you have read Eric’s book, What Is Thought? It has some lines very much along these lines. He gave me this great term for the kind of architectures that we’re turning our back on: Politburo architectures, echoing the old Soviet Union. Those are the kind of control systems that we’re suggesting are not to be found in the brain.

Sebastian Seung is a young neuroscientist at MIT, and in his keynote address at the Society for Neuroscience a couple of years ago, he spoke about selfish neurons and hedonistic synapses. He has an earlier paper, way back in 2003, where he talks about hedonistic synapses. Now, these are interesting metaphors. I think we should take them quite seriously. I think that once we look at the brain as composed of selfish neurons, we get a much better handle on some of the phenomena that we’re trying to understand.

Worrying Abstractions About Information Processing?

Now, as you know, for my whole career I’ve been a defender of strong AI and a defender of the computer metaphor for the brain, and I’m still a defender of it. But the best, the most thought-provoking critic that I’ve encountered lately is Terry Deacon. I wonder how many of you have read Terry. How many of you have read this book [Incomplete Nature]? Well, I’m a big fan of his earlier book, The Symbolic Species, which I think has many really good ideas which have not been properly acknowledged and taken up and incorporated into the thinking of many people in cognitive science. This book is more ambitious by half, and it’s not an easy read, but I decided it’s a very important book, even though I don’t agree with it at all.

Let me put it this way. I, and I bet most of you, have been inoculated against what I’m going to call ‘romantic science’ on these issues. We’ve had a whole tradition of romantic critics of the computer and the engineering metaphors. We have Maturana with autopoiesis, and Varela, and Evan Thompson, my former post-doc, and many others. You can form your own list: Prigogine, and a whole lot of, I just call them romantic scientists, who have argued that this whole idea of artificial intelligence is a travesty, that it misses the super wonderful essence of life and mind in one way or another. So I’ve been pretty well inoculated against this, and yet Terry’s book really seems to me to have new arguments worth taking quite seriously. And I’m just going to highlight a couple of items from them. (I have a review that’s coming out in the Quarterly Review of Biology, but it’s not coming out until December, and it’s embargoed till then, so I can’t put it on my website. But in due course it will be out. [Review is now available here.])

Deacon has the following arresting idea about Shannon and Shannon information. We tend to applaud, and for very good reason, Shannon’s brilliant divorce of information from thermodynamics, his creation of the basic structure for information theory, the brilliant abstraction of communication between a sender and a receiver over a channel. And of course, without this wonderful theoretical analysis, we wouldn’t have the bits and bytes and terabytes and bioinformatics and all the rest of the language of information theory. But of course, we also know that Shannon information is not the same as the information that we’re usually talking about in cognitive science, which is semantic information of one sort or another. That’s still an embarrassing lacuna in our foundations.
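[Note: a small worked example, not from the talk, of the point that Shannon information quantifies statistical surprise over an ensemble and is indifferent to meaning. The distributions are invented.]

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2(p)) of a source's symbol distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Eight equally likely symbols carry 3 bits each, whether the symbols are
# words of a love letter or meaningless noise; the measure cannot tell.
print(entropy_bits([1/8] * 8))    # 3.0
print(entropy_bits([0.5, 0.5]))   # 1.0 (one fair coin flip)
```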

I think, maybe not in AI, but I think a great many people in cognitive science and cognitive neuroscience and cognitive psychology are very comfortable throwing the term information around because they think they are licensed by Shannon information. No, they’re not, and that is an unrepaired pothole in our road.

But amazingly, Deacon says, wonderful though Shannon information theory is, the flaw lies right at the outset. As you will remember, the diagrams show a channel with a sender and a receiver and a code, and then you work out from the ensemble how many bits are being sent and so forth. Right there, the sender and receiver are embarrassing homunculi that really have to be discharged one way or another, and we’ve never really figured out how to discharge them. It presupposes two cooperative agents.

What happens to those agents when we turn to semantic information? Well, if you’re like me, you’ve all along basically assumed that a computer itself can stand in. That we don’t need a human agent; we can just have an artificial agent, a computer agent, and it can be the touchstone, the receiver of information. Also, we say, we won’t talk about the sender when the sender is the whole environment. We can talk about the sender when the sender is the eye, what the frog’s eye tells the frog’s brain. But we don’t talk about the tree sending information to the eye. We don’t talk about that.

I think this is very plausible; it’s still very plausible. After all, we’re quite careful. We say: look, in economics you can replace sales clerks with vending machines with no loss. They can serve all the functions quite well, and we can take Shannon senders and receivers and replace them with artificial agents, computer agents, and carry on with no difficulty. Well, Deacon thinks not.

Why? What does he think? He says this (this is not quoting him): AI researchers have postponed consideration of such phenomena as energy capture, unaided reproduction, self-repair, and open-ended self-revision. Those are hard problems; we’ll save those for later. They expect to reap the benefits of this simplification by first getting clear about the purely informational phenomena of learning and self-guidance. Right?

It seems to me that does accurately describe the practice.

And if it does, this is a gamble. We’re saying: look, yeah, yeah, yeah, eventually I have to worry about making a whole organism that can capture energy and has a metabolism, but that’s for later. I don’t need that now. Hey, I’m doing a chess program; the chess program just gets its energy from the plug that goes in the wall. We don’t worry about that for now. And ditto for self-repair, and of course reproduction.

Deacon thinks, however, that by abandoning the link to thermodynamics, we settle for what he calls parasitical computational architectures. And he thinks this parasitism is a fundamental feature of the architecture. This is something that we recognize when we confront Ray Kurzweil and he talks about how the singularity is near and how people aren’t going to be necessary. We say: wait a minute, who’s going to repair the robots? Who’s going to repair the hardware when the second law of thermodynamics takes its toll?

Sometimes people say, well, the superintelligences will keep us around as custodians and repairmen for a while, until they figure out how to do that. What Deacon is saying is: the architecture you get when you leave off, when you finesse, when you postpone the metabolism problem and the self-repair problem, to take the two most obvious, is just going to be the wrong architecture. It’s not going to do the work.

Now, why on earth would anybody think that? Well, I’m going to try to show you.

Thinking Tools and Feral Neurons

So, according to Deacon, if you think a self-repairing, self-replacing computer is something to design later, you’re mistaken. Well, we’ll sneak up on why that might be. Think about the brain’s plasticity. It is stunning. Many experiments show it. For instance, take Mike Merzenich’s experiments: first you map the somatosensory cortex of a rhesus monkey and see where each finger is represented. Then you suture two fingers together for a few weeks. Then you go in and map again, and you see that the area has shrunk: it’s now one area that’s representing the two fingers that were sutured together. And the neural tracts that were previously devoted to those two fingers are now being used for other purposes. That’s a very dramatic example, and there’s a tremendous amount of that plasticity. Every time people go looking for it, they find it, even to the point where it makes chronic experiments with living brains really hard to do, because when you go back, your landmarks may be just wrong.

That’s a real feature and as far as I know, there’s nothing that is even a very good approximation of that in the hardware that we are designing our programs for.

Well, what explains the brain’s plasticity? Very simple: there are neurons looking for work. Neurons, if they don’t get work, are going to die. They’re going to be reabsorbed and their parts reused.

François Jacob had the great saying that the dream of every cell is to become two cells. That’s very true in general, but neurons have lost that dream, because they don’t reproduce. So they don’t have offspring in the future; they just want to stay alive as long as they can. They are designed by evolution to do that, and to be ready to exert themselves and use energy to grope about, trying to find a better way of making a living locally. Maybe that economic structure is also a prerequisite for truly creative, open-ended intelligence. I’ll say a bit about why that might be.

Here’s a slide I’ve been using for a year or two. Some of you may have seen it before. These two very similar structures are both artifacts created by organisms. The one on the left is an Australian termite castle, and the one on the right, of course, is Gaudí’s famous Sagrada Família church in Barcelona.

They are very similar in outward appearance, but the R&D that went into them, the design and construction, profoundly different. In the case of the termite castle, this is bottom-up, mindless, local control building. There’s a queen termite, but she’s not the boss. She doesn’t know what she’s doing. There’s no blueprints, there’s no plans, there’s no hierarchy. It’s just individual termites doing their mindless little thing all under the control of local triggers and their own genetic dispositions, and these amazing structures get built. Competence without comprehension. They haven’t the faintest idea what they’re doing and they don’t have to.

Gaudí (this is so convenient for me) is the very caricature, the very paragon of the intelligent designer: the charismatic genius with the manifestos and the blueprints. And he’s bossing a team of underlings who are bossing a team of underlings who are bossing a team of workers, and it’s just top-down, dictatorial, intelligent design, or would-be intelligent design, from Gaudí. So here we see two similar artifacts made by processes which are fundamentally different in their underlying organization. The difference between clueless termites and Gaudí is pretty astounding.

But now think of this. What have you got between your ears?

200 million termites.

How do you get a termite-colony type of brain to become a Gaudí type of mind? We can understand how termite colonies can be pretty good cognitive agents. They’re really clever. I mean, termite colonies as a whole engage in some pretty clever and quite discriminating and adaptive behaviors, and a lot of animals of course do too. But with Gaudí, there’s something very different going on. And the thing is: how is that possible, when what he’s got between his ears looks for all the world like just a larger version, not a million termites, but 200 billion neurons, and the neurons are even more clueless than the termites? How do you get 200 billion neurons with no boss neuron, no king? How do you get them organized to do the stuff that makes it possible to be a Gaudí?

Well, this is the key, I think. My former student, Bo Dahlbom, was saying:

You can’t do much carpentry with your bare hands. You can’t do much thinking with your bare brain.

Animals have bare brains; we don’t. We have lots of thinking tools. And the thinking tools that we put into our brains, the thinking tools that find their way into our brains, are the key, I think, to the transformation, the creation of the virtual architecture on which our intelligent minds depend. Gaudí didn’t have a bare brain. He was teeming with mind tools. Memes, in short. Memes, like viruses, are not alive, but like viruses, they’re subject to natural selection.

I wonder if this audience is as repelled by the idea of memes as many of the audiences that I have to deal with. In the humanities, I find that people have a really visceral reaction against it. I’m slowly eroding that, I think, and I’ve got some more considerations. That’s what I’m working on right now: an enlarged and better and improved theory of cultural evolution, which starts out very Darwinian and then gets less and less Darwinian.

There’s no question that viruses are absolutely fine candidates for natural selection, but they’re not even alive. What are they? They are, as I like to say, ‘strings of nucleic acid with attitude’. By attitude, I just mean that something about their shape gives them a disposition to foster their own replication under various circumstances, and the ones that are really good at this thrive, and the ones that aren’t go extinct. So you don’t have to be alive, you don’t have to have a metabolism, you don’t have to have sex to be a proper item for natural selection. And what are memes? They’re software with attitude, they’re data structures with attitude. Same deal. They can evolve by natural selection, by differential replication, same as viruses do.

Here I’m going to give you a really unsettling possibility. And of course, it may not amount to anything in the end, but I think it’s more than fun to think about: Feral neurons.

You know about feral animals. Domesticated animals like pigs, sheep, if you let them go feral, if you release them from domestication, they very quickly resume a lot of their wild talents.

Why? Because it turns out that domestication and breeding have not erased those genes. They’ve just commented them out. They’re still in there; they’re just not expressed anymore. Or they’re almost in there, they’re beginning to fall apart, but they’re still in there. A very small genetic change simply erases the brackets and they’re commented back in, and then, in just a few generations, you get pigs that are very much like wild boars, sheep that are very much like wild sheep, and so forth. Those are feral animals.
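[Note: the “commented out” metaphor, rendered literally in code, not from the talk. The behaviors are invented stand-ins; the point is just that the wild routine is still in the source, and a one-character change re-expresses it.]

```python
def root_and_roam_like_a_wild_boar():
    return "wild foraging"

def wait_placidly_at_the_trough():
    return "domesticated foraging"

def foraging_behavior():
    # return root_and_roam_like_a_wild_boar()  # present, but not expressed
    return wait_placidly_at_the_trough()       # deleting one '#' above flips it
```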

I’m suggesting that maybe some of the neurons in your brain are encouraged to go feral to regain some of the individuality and resourcefulness of their unicellular ancestors from half a billion years ago.

Why? Well, they’re released from domesticity, where they’ve been working in the service of animal cognition, uncomplainingly, so that their talents can be exploited by invading memes competing for influence. There’s a coevolution between genetic and cultural evolution. And once a brain becomes a target for meme invasion, the relaxation, or the re-expression, of some otherwise suppressed talents of neurons creates a better environment, a better architectural environment, for the competition of memes to go forward. Armies of the descendants of prisoners, now enslaved by the invaders.

Did you just get the idea? I want to make sure you see what might be interesting about it. In general, the cells of a multicellular organism are dutiful little slaves, happy slaves. They don’t strike out on their own. They’re good team players. If you’ve seen Antz, Woody Allen plays the role of the renegade ant. Or is it a bee? Well, in any case. Most of your neurons, I think, are cooperative neurons too. I am suggesting that it’s possible that cortical neurons, in a creature that’s strongly under selection for imbibing culture, may have their body plans and their talents revised in the direction of being more individualistic, more selfish, so that they are not as regimented in their behavior.

Doing Reductionism on Darwinism

Now, another book, this is Uncle Dan’s list of books I think you should read: Peter Godfrey-Smith’s Darwinian Populations and Natural Selection. There’s a review of that on my website; that one isn’t embargoed, so if you’re interested in it… But one of the ideas in this, which I think is really a wonderful thinking tool, is this idea of Darwinian spaces. This is one of Peter’s diagrams.

A Darwinian space is just a three-dimensional cube like this. There are many dimensions you could put on the axes, but you can use this both to plot trajectories and to look at similarities and differences in variation. And this is one of the basic ones.

Way up in the upper right-hand corner, at (1,1,1), we have paradigm Darwinian processes. These are the parade-case processes that are Darwinian if anything is. In the lower left-hand corner are phenomena that aren’t Darwinian at all. And the three dimensions that we’re looking at here are: continuity, the smoothness of the fitness landscape. If you’ve got a nice smooth fitness landscape, then hill-climbing works, and that’s a requirement for a Darwinian process. You have fidelity of heredity along here. If you don’t have high enough fidelity of replication, you get the error catastrophe, for instance. Then he has dependence of realized fitness differences on intrinsic properties, which is along this dimension. Where that’s not the case, you get drift. Of course, drift can be anywhere along here, but it will be particularly strong where there’s a rugged landscape.

But look, he’s put human cells up here. And that’s because, although human cells, when they proliferate, both in the fetus and in early life and development, undergo a population explosion, it’s only a quasi-Darwinian process. It’s quasi-Darwinian particularly in the S-dimension, because what’s important is not so much the intrinsic properties of the cells; neural stem cells, for instance, are pretty much all alike. It’s location, location, location, which is not an intrinsic property. So where you have selection for location, you have an only quasi-Darwinian process. So I hope you can see, this is a diagram which permits you to look at variations, to look at phenomena that are only somewhat Darwinian.

Here’s a nice point about this. We want to be Darwinians about Darwinism itself and recognize that it doesn’t have an essence. There are hemi-, semi-, demi-varieties of Darwinian processes. So we can be Darwinian about Darwinism.

And we can also look at what he calls de-Darwinization. De-Darwinization is when a phenomenon, over evolutionary time, moves away from Darwinian towards less Darwinian. Human cells are a great case of that, because their ancestors were selected for their individual intrinsic fitness. Now that they’ve become part of a multicellular organism, they’re being selected not so much for intrinsic features as for location. So there’s been a de-Darwinizing. He has other examples of de-Darwinizing phenomena.
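[Note: a minimal rendering, not from the talk, of Godfrey-Smith’s Darwinian space as a data structure: each phenomenon is a point on three axes scaled from 0 to 1. The coordinates below are illustrative guesses, not Godfrey-Smith’s own placements.]

```python
from dataclasses import dataclass

@dataclass
class DarwinianSpacePoint:
    name: str
    continuity: float  # C: smoothness of fitness landscape (hill-climbing works)
    heredity: float    # H: fidelity of replication (too low -> error catastrophe)
    intrinsic: float   # S: realized fitness depends on intrinsic properties

paradigm = DarwinianSpacePoint("paradigm Darwinian process", 1.0, 1.0, 1.0)
human_cells = DarwinianSpacePoint("human cells (location, location, location)",
                                  0.9, 1.0, 0.2)
drift = DarwinianSpacePoint("pure drift", 0.3, 0.9, 0.0)

# De-Darwinization is then a trajectory through the space over evolutionary
# time: e.g., free-living ancestors near (0.9, 0.9, 0.9) moving to human_cells.
```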

Cultural Evolution

But now I want to look, finally, at culture. This is my attempted irenic (not ironic, but irenic), peaceful resolution of a problem that scares so many people when you talk about culture. Because here’s what I want to say.

Now we’re going to turn Darwin’s inversion upside down. Up in this corner, we’re going to put intelligent design, meaning by that not God, but intelligent human designers, people like Turing and Gaudí and Picasso. And down in the lower corner, we’re going to put Darwinian design: mindless, bottom-up, purposeless. So the dimensions I have here are comprehension, local versus global, and random versus directed search.

If you’ve got the blindest of blind trial-and-error search in a process where there’s minimal comprehension, then you’re down here, right here. The termite castles are somewhere in the middle ground.

Oh, there’s a nice example. I don’t have the slide that goes with it, but I’ve put it on this slide. I’ll tell you about the Polynesian canoes. There’s a lovely quote that Godfrey-Smith comes up with, by a French philosopher named Alain, who is actually talking not about Polynesian canoes but about Breton fishing vessels. And he says: “The boats are copied. The ones that come back are copied.” The sea is doing the selection, and this is Darwinian. Basically, the boat builders don’t have to know anything about boat building. They just copy what comes back. Over time, their boats become more and more seaworthy.

Now, they may have lots of ideas about boat design. They may be right, they may be wrong; it doesn’t really matter that much. They may be able to speed up the design improvement process a little bit, in which case they’re over here a little bit more, but it’s actually not all that necessary. Once they’ve got a good design, they just cling to it. They’re very conservative, and it works, and they don’t have to… They can be the beneficiaries of their lovely boat designs without understanding why they’re good, in the same way that the termites can be beneficiaries of their beautiful castle without understanding why it’s good. This is competence without comprehension.

But what’s happened is, when culture started out, human culture, it started out down in this corner. Now we’re moving up towards intelligent design. What’s interesting is that it’s even happening to the word “meme”, which started out as a word for an authorless cultural item which spread like a virus, and now you have competitions on the internet to intelligently design a meme that will go viral. That’s really contrary to what Dawkins was talking about at the time. But what’s happened is that those memes are memes that are living, or purporting to live, up in the high ground here.

Now, what’s nice about this diagram, I think, is that we can look at it and ask about traditional humanities approaches to culture. They think of all culture as existing up here at the top levels, and pretty much over in this corner. Culture consists of treasures, brilliantly designed by geniuses, that are appreciated. We understand why they’re great; that’s why we preserve them and maintain them and bequeath them to our children as great treasures and so forth. An economic model works very well for that. These are treasures.

But a lot of human culture isn’t like that. A lot of human culture is not even useful. It just replicates because it can; it’s like the cold virus. What’s it good for? Nothing. It’s just good for itself. Similarly, there are a lot of memes that are just good for themselves. And in between is where most human culture resides, in a space that I call ‘foible exploiting’.

This is where, in a quasi-evolutionary process, all the sorts of flaws and cracks and lacunae and weak spots, the ‘good enough for government work’ flaws in our designs, get exploited by agents who often themselves don’t understand why they’re doing it or why this is a good idea. And it’s the arms races that go on in that area that help design and redesign everything from religions to con games to sports and everything else in the world.

The basic memes are words. What are words? Words are memes that can be pronounced. In fact, the big mistake that Richard Dawkins made when he wrote The Selfish Gene was in not stressing the fact that words themselves are the best example of memes. Very few of them are invented by anybody. A few words are coined, but in general words have no authors. They have very clear genealogies, which have been studied for centuries, going back hundreds of years. You take them on, you don’t acquire them by deliberate… you don’t go out and buy them, and you don’t seek them out. They just come to rest in you.

So what this means is that we have to start thinking about our minds as software, because words are virtual machines. Ray Jackendoff, in his wonderful book Foundations of Language, says that words, lexical items, are semi-autonomous informational structures with multiple roles to play in cognition. How do words get installed in a brain? By an implantation process which is itself somewhat gradual; it takes a few hearings. The first time a baby hears a word, it’s a sound in a context. The second time, it’s a familiar sound, sometimes in a familiar, similar context. The third hearing, it’s that sound again; now the context is a little clearer, but an auditory anchor has been lodged in the brain. According to Deb Roy’s research, it’s about five or six hearings of a word, and then the kid starts trying to say it. So there’s the gradual establishment and installation of that virtual machine in that kid’s head. It’s like a Java app, but it just takes a little longer to put it in your neck-top.

Remember, children learn two or three words a day from birth to age five. They also learn lots of other memes, memes without auditory anchors. (Remember, words are the memes that can be pronounced.) Now, these are selfish memes competing for reproduction in the brain. Every time you say a word to yourself, just to talk about words for a moment, you make another copy; that’s a descendant in your head. But they’re also stupid. They’re selfish, but stupid. Murray [Shanahan] was talking about how we solve the frame problem, and I now want to articulate how this view looks at that.

Instead of having a miraculously intelligent designer that knows where the relevant things are, you have lots of stupid memes, some of them words, some of them not. They’re all trying to get replicated all the time, and they’re competing, using coalitions of neurons to do it. They’re competing for influence in the current circumstances, and the relevance tuning, the relevance tracking, occurs by a Darwinian process, without anybody having to understand, without the neural tracts having to understand, what they’re doing. Competence with partial comprehension.

One of the things that got me thinking about all this is that I kept running into these neuroscientists saying, “Dopamine is the currency of reward.” Now, there’s something really weird about that. If they really mean currency, I want to know: what do the neurons buy with their dopamine? And then I realized that wasn’t such a stupid question, because neurons grow receptors in response to the amount of neuromodulator and neurotransmitter that they get. So there is an important sense in which they are feeding off their neuromodulators. And what they’re buying is time and influence and the capacity to go on making more connections. So there they are, groping around in their little neuronic world, trying to stay healthy and maximize their dopamine so that they can keep going.
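[Note: a loose, speculative sketch, not from the talk, of “dopamine as currency”: a toy neuron that captures neuromodulator with its receptors and spends the surplus on more receptors and more connections, i.e., on time and influence. All numbers are invented.]

```python
class SelfishNeuron:
    def __init__(self):
        self.receptors = 10.0   # more receptors -> captures more dopamine
        self.connections = 5    # more connections -> more chances to get work

    def tick(self, dopamine):
        earned = dopamine * self.receptors / 100  # what its receptors capture
        self.receptors += earned - 1.0            # minus a maintenance cost
        if earned > 1.0:
            self.connections += 1                 # reinvest surplus in networking

    def alive(self):
        return self.receptors > 0

busy, idle = SelfishNeuron(), SelfishNeuron()
for _ in range(30):
    busy.tick(dopamine=20)  # gets work: grows receptors and connections
    idle.tick(dopamine=0)   # no work: dwindles, dies, parts reabsorbed
print(busy.connections, idle.alive())  # -> 35 False
```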

So not just brain plasticity, but also relevance sifting and problem solving, is accomplished by bottom-up competition among largely clueless agents. As I mentioned, they’re like Java applets.

Let me do my little exercise. I enjoy this; I think you will too. So: evolution discovered digitization before we did, before the digital revolution. Here’s a famous example from Oliver Selfridge. What do you see? “THE CAT”.

But if you look closely, you’ll see that the H and the A are exactly the same shape. You, with your EVM, your English virtual machine, installed, automatically correct to the norm. If you weren’t an English speaker, you wouldn’t do it. You see it as “THE CAT” because you have a built-in, stupid, automatic digitizer of the letters. But it also works, and more importantly, for speech.
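[Note: a toy version, not from the talk, of the Selfridge example: an ambiguous glyph gets “corrected to the norm” by whichever reading yields a word the reader’s virtual machine already has installed. The two-word lexicon is invented.]

```python
LEXICON = {"THE", "CAT"}

def digitize(word, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
    """Resolve the ambiguous glyph '?' to whichever letter makes a known word."""
    for letter in alphabet:
        candidate = word.replace("?", letter)
        if candidate in LEXICON:
            return candidate
    return None

# The same ambiguous shape is read as H in one context and A in the other.
print(digitize("T?E"), digitize("C?T"))  # THE CAT
```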

Here’s a little demonstration of that. I want you to repeat after me. You ready? Listen carefully.

Mundify the epigastrium.

Audience: Mundify the epigastrium.

Again.

Audience: Mundify the epigastrium.

One more time.

Audience: Mundify the epigastrium.

Perfect. Anybody know what it means? No, you don’t. You don’t have to know what it means. Now, second case, same deal. Listen carefully.

[inaudible 01:06:06].

Audience: *Laughter*

You can’t do it. The acoustic signal had just as much contour, just as much variation, just as much energy as the first one, but you don’t have a virtual machine for turning that into phonemes. You can’t do it. Phonemes are a sort of user illusion in themselves. Cat, cat, cat: if you looked at those physically, they’d all be quite different, but you hear them as all the same. Phonemes are the digitization of sound, which makes possible the transmission of semi-understood and not-understood messages. We always talk about how wonderful it is that we can ‘understand’ speech; that’s indeed absolutely wonderful. But it’s even more wonderful, in a way, that we can convey, transmit, speech that we don’t understand, thanks to the digitization that we get from phonemes.
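[Note: a minimal sketch, not from the talk, of why digitization matters for transmission: copying an analog value accumulates noise without bound, while snapping each copy to the nearest item of a discrete “phoneme” inventory corrects the error at every step, so even messages nobody understands can be passed on intact. All numbers are invented.]

```python
import random

PHONEMES = [0.0, 1.0, 2.0, 3.0]  # stand-ins for a small phoneme inventory

def copy_chain(value, steps, digitize):
    """Pass a signal down a chain of copiers, with noise at every copy."""
    for _ in range(steps):
        value += random.gauss(0, 0.1)  # transmission noise
        if digitize:
            # Correct to the norm: snap to the nearest phoneme.
            value = min(PHONEMES, key=lambda p: abs(p - value))
    return value

random.seed(0)
print(copy_chain(2.0, 100, digitize=False))  # drifts away from the original
print(copy_chain(2.0, 100, digitize=True))   # still exactly 2.0
```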

This, I submit, is the key evolved innovation, the design feature which made human culture possible, which made human minds possible. Other species have very rudimentary cultures that they transmit, but they never go combinatorial: not the chimpanzees, not the whales, not the dolphins, not the elephants. For that, you need the digitizing discipline of language, the best thinking tool of all.

Thanks for your attention.