Intelligence Explosion vs. Co-operative Explosion

Abstract: In the FOOM debate, Eliezer emphasizes ‘optimization power’, something like intelligence, as the main thing that makes both evolution and humans so powerful. A different choice of abstractions says that the main thing that has been giving various organisms—from single-celled creatures to wasps to humans—an advantage is the capability to form superorganisms, thus reaping the gains of specialization and shifting evolutionary selection pressure to the level of the superorganism. There seem to be several ways in which a technological singularity could involve the creation of new kinds of superorganisms, which would then reap benefits above and beyond those that individual humans can achieve, and which would quite likely have very different values. This strongly suggests that even if one is not worried about the intelligence explosion (because, say, one finds a hard takeoff improbable), one should still be worried about the co-operative explosion.

After watching Jonathan Haidt’s excellent new TED Talk yesterday, I bought his latest book, The Righteous Mind: Why Good People Are Divided by Politics and Religion. At one point, Haidt discusses evolutionary superorganisms—cases where previously separate organisms have joined together into a single superorganism, shifting evolution’s selection pressure to operate on the level of the superorganism and avoiding the usual pitfalls that block group selection (excerpts below). Because the previously separate organisms can now co-operate far more effectively, these new superorganisms can often out-compete simpler organisms.

Suppose you entered a boat race. One hundred rowers, each in a separate rowboat, set out on a ten-mile race along a wide and slow-moving river. The first to cross the finish line will win $10,000. Halfway into the race, you’re in the lead. But then, from out of nowhere, you’re passed by a boat with two rowers, each pulling just one oar. No fair! Two rowers joined together into one boat! And then, stranger still, you watch as that rowboat is overtaken by a train of three such rowboats, all tied together to form a single long boat. The rowers are identical septuplets. Six of them row in perfect synchrony while the seventh is the coxswain, steering the boat and calling out the beat for the rowers. But those cheaters are deprived of victory just before they cross the finish line, for they in turn are passed by an enterprising group of twenty-four sisters who rented a motorboat. It turns out that there are no rules in this race about what kinds of vehicles are allowed.

That was a metaphorical history of life on Earth. For the first billion years or so of life, the only organisms were prokaryotic cells (such as bacteria). Each was a solo operation, competing with others and reproducing copies of itself. But then, around 2 billion years ago, two bacteria somehow joined together inside a single membrane, which explains why mitochondria have their own DNA, unrelated to the DNA in the nucleus. These are the two-person rowboats in my example. Cells that had internal organelles could reap the benefits of cooperation and the division of labor (see Adam Smith). There was no longer any competition between these organelles, for they could reproduce only when the entire cell reproduced, so it was “one for all, all for one.” Life on Earth underwent what biologists call a “major transition.” Natural selection went on as it always had, but now there was a radically new kind of creature to be selected. There was a new kind of vehicle by which selfish genes could replicate themselves. Single-celled eukaryotes were wildly successful and spread throughout the oceans.

A few hundred million years later, some of these eukaryotes developed a novel adaptation: they stayed together after cell division to form multicellular organisms in which every cell had exactly the same genes. These are the three-boat septuplets in my example. Once again, competition is suppressed (because each cell can only reproduce if the organism reproduces, via its sperm or egg cells). A group of cells becomes an individual, able to divide labor among the cells (which specialize into limbs and organs). A powerful new kind of vehicle appears, and in a short span of time the world is covered with plants, animals, and fungi. It’s another major transition.

Major transitions are rare. The biologists John Maynard Smith and Eörs Szathmáry count just eight clear examples over the last 4 billion years (the last of which is human societies). But these transitions are among the most important events in biological history, and they are examples of multilevel selection at work. It’s the same story over and over again: Whenever a way is found to suppress free riding so that individual units can cooperate, work as a team, and divide labor, selection at the lower level becomes less important, selection at the higher level becomes more powerful, and that higher-level selection favors the most cohesive superorganisms. (A superorganism is an organism made out of smaller organisms.) As these superorganisms proliferate, they begin to compete with each other, and to evolve for greater success in that competition. This competition among superorganisms is one form of group selection. There is variation among the groups, and the fittest groups pass on their traits to future generations of groups.

Major transitions may be rare, but when they happen, the Earth often changes. Just look at what happened more than 100 million years ago when some wasps developed the trick of dividing labor between a queen (who lays all the eggs) and several kinds of workers who maintain the nest and bring back food to share. This trick was discovered by the early hymenoptera (members of the order that includes wasps, which gave rise to bees and ants) and it was discovered independently several dozen other times (by the ancestors of termites, naked mole rats, and some species of shrimp, aphids, beetles, and spiders). In each case, the free rider problem was surmounted and selfish genes began to craft relatively selfless group members who together constituted a supremely selfish group.

These groups were a new kind of vehicle: a hive or colony of close genetic relatives, which functioned as a unit (e.g., in foraging and fighting) and reproduced as a unit. These are the motorboating sisters in my example, taking advantage of technological innovations and mechanical engineering that had never before existed. It was another transition. Another kind of group began to function as though it were a single organism, and the genes that got to ride around in colonies crushed the genes that couldn’t “get it together” and rode around in the bodies of more selfish and solitary insects. The colonial insects represent just 2 percent of all insect species, but in a short period of time they claimed the best feeding and breeding sites for themselves, pushed their competitors to marginal grounds, and changed most of the Earth’s terrestrial ecosystems (for example, by enabling the evolution of flowering plants, which need pollinators). Now they’re the majority, by weight, of all insects on Earth.

Haidt’s argument is that color politics and other political mind-killingness are due to a set of adaptations that temporarily lets people merge into a superorganism and set individual interest aside. To a lesser extent, so are moral intuitions about things such as fairness and proportionality. Yes, it’s a group selection argument. Haidt acknowledges that group selection has been unpopular in biology for a while, but notes that it has been making a comeback recently, and cites e.g. work on multi-level selection as supporting his thesis. I mention some of his references (which I have not yet read) below.
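
For readers who want a formal handle on what ‘multi-level selection’ means here, the standard multilevel Price-equation partition (my own gloss, in the form presented in e.g. Okasha 2006, which appears in Haidt’s reference list below; the equation itself is not in Haidt’s book) splits the change in the population mean of a trait into a between-group and a within-group component. Suppressing free riding shrinks the within-group term, leaving between-group selection to do the work:

```latex
% Multilevel Price equation (MLS1 form), transmission term omitted for simplicity.
% z_{ik}, w_{ik}: trait value and fitness of individual i in group k;
% Z_k, W_k: group means of trait and fitness; \bar{z}, \bar{w}: population means.
\bar{w}\,\Delta\bar{z}
  \;=\; \underbrace{\operatorname{Cov}_{k}\!\bigl(W_{k},\,Z_{k}\bigr)}_{\text{between-group selection}}
  \;+\; \underbrace{\operatorname{E}_{k}\!\bigl[\operatorname{Cov}_{i}\!\bigl(w_{ik},\,z_{ik}\bigr)\bigr]}_{\text{within-group selection}}
```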

Anyway, the reason I’m bringing this up is that I’ve been re-reading the FOOM debate of late, and in Life’s Story Continues, Eliezer references some of the same evolutionary milestones as Haidt does. And while Eliezer also mentions that cells provided a major co-operative advantage that allowed for specialization, he views this merely through the lens of optimization power, and dismisses e.g. unicellular eukaryotes with the words “meh, so what”.

Cells: Force a set of genes, RNA strands, or catalytic chemicals to share a common reproductive fate. (This is the real point of the cell boundary, not “protection from the environment”—it keeps the fruits of chemical labor inside a spatial boundary.) But, as we’ve defined our abstractions, this is mostly a matter of optimization slope—the quality of the search neighborhood. The advent of cells opens up a tremendously rich new neighborhood defined by specialization and division of labor. It also increases the slope by ensuring that chemicals get to keep the fruits of their own labor in a spatial boundary, so that fitness advantages increase. But does it hit back to the meta-level? How you define that seems to me like a matter of taste. Cells don’t quite change the mutate-reproduce-select cycle. But if we’re going to define sexual recombination as a meta-level innovation, then we should also define cellular isolation as a meta-level innovation. (Life’s Story Continues)

The interesting thing about the FOOM debate is that both Eliezer and Robin seem to talk a lot about the significance of co-operation, but they never quite take it up explicitly. Robin talks about the way that isolated groups typically aren’t able to take over the world, because it’s much more effective to co-operate with others than to try to do everything yourself, or because information within the group tends to leak out to other parties. Eliezer talks about the way that cells made specialization possible, and how writing allowed human culture to accumulate and people to build on each other’s inventions.

Even as Eliezer talks about intelligence, insight, and recursion, one could view this, too, as a discussion of the power of specialization, co-operation, and superorganisms—for intelligence seems to consist of a large number of specialized modules, all somehow merged to work in the same organism. And Robin seems to view large groups of people as acting as a kind of loose superorganism, thus beating smaller groups that try to do things alone:

Independent competitors can more easily displace one another than interdependent ones. For example, since the unit of the industrial revolution seems to have been Western Europe, Britain, who started it, did not gain much relative to the rest of Western Europe, but Western Europe gained more substantially relative to outsiders. So as the world becomes interdependent on larger scales, smaller groups find it harder to displace others. (Outside View of Singularity)

[Today] innovations and advances in each part of the world depend on advances made in all other parts of the world. … Visions of a local singularity, in contrast, imagine that sudden technological advances in one small group essentially allow that group to suddenly grow big enough to take over everything. … The key common assumption is that of a very powerful but autonomous area of technology. Overall progress in that area must depend only on advances in this area, advances that a small group of researchers can continue to produce at will. And great progress in this area alone must be sufficient to let a small group essentially take over the world. …

[Consider also] complaints about the great specialization in modern academic and intellectual life. People complain that ordinary folks should know more science, so they can judge simple science arguments for themselves. … Many want policy debates to focus on intrinsic merits, rather than on appeals to authority. Many people wish students would study a wider range of subjects, and so be better able to see the big picture. And they wish researchers weren’t so penalized for working between disciplines, or for failing to cite every last paper someone might think is related somehow.

It seems to me plausible to attribute all of these dreams of autarky to people not yet coming fully to terms with our newly heightened interdependence. … We picture our ideal political unit and future home to be the largely self-sufficient small tribe of our evolutionary heritage. … I suspect that future software, manufacturing plants, and colonies will typically be much more dependent on everyone else than dreams of autonomy imagine. Yes, small isolated entities are getting more capable, but so are small non-isolated entities, and the latter remain far more capable than the former. The riches that come from a worldwide division of labor have rightly seduced us away from many of our dreams of autarky. We may fantasize about dropping out of the rat race and living a life of ease on some tropical island. But very few of us ever do. (Dreams of Autarky)

Robin has also explicitly made the point that it is the difficulty of co-operation which suggests that we can keep ourselves safe from uploads or AIs with hostile intentions:

What if uploads decide to take over by force, refusing to pay back their loans and grabbing other forms of capital? Well for comparison, consider the question: What if our children take over, refusing to pay back their student loans or to pay for Social Security? Or consider: What if short people revolt tonight, and kill all the tall people?

In general, most societies have many potential subgroups who could plausibly take over by force, if they could coordinate among themselves. But such revolt is rare in practice; short people know that if they kill all the tall folks tonight, all the blond people might go next week, and who knows where it would all end? And short people are highly integrated into society; some of their best friends are tall people.

In contrast, violence is more common between geographic and culturally separated subgroups. Neighboring nations have gone to war, ethnic minorities have revolted against governments run by other ethnicities, and slaves and other sharply segregated economic classes have rebelled.

Thus the best way to keep the peace with uploads would be to allow them as full as possible integration in with the rest of society. Let them live and work with ordinary people, and let them loan and sell to each other through the same institutions they use to deal with ordinary humans. Banishing uploads to space, the seas, or the attic so as not to shock other folks might be ill-advised. Imposing especially heavy upload taxes, or treating uploads as property, as just software someone owns or as non-human slaves like dogs, might be especially unwise. (If Uploads Come First)

Situations like war or violent rebellions are, arguably, cases where the “human superorganism adaptations” kick in the strongest—where people have the strongest propensity to view themselves primarily as part of a group, and where they are most ready to sacrifice themselves for the interest of the group. Indeed, Haidt quotes (both in the book and the TED Talk) former soldiers who say that there is something unique about the states of consciousness that war can produce:

So many books about war say the same thing, that nothing brings people together like war. And that bringing them together opens up the possibility of extraordinary self-transcendent experiences. I’m going to play for you an excerpt from this book by Glenn Gray. Gray was a soldier in the American army in World War II. And after the war he interviewed a lot of other soldiers and wrote about the experience of men in battle. Here’s a key passage where he basically describes the staircase.

Glenn Gray: Many veterans will admit that the experience of communal effort in battle has been the high point of their lives. “I” passes insensibly into a “we,” “my” becomes “our” and individual faith loses its central importance. I believe that it is nothing less than the assurance of immortality that makes self-sacrifice at these moments so relatively easy. I may fall, but I do not die, for that which is real in me goes forward and lives on in the comrades for whom I gave up my life.

So Robin, in If Uploads Come First, seems to basically be saying that uploads are dangerous if we let them become superorganisms. Usually, individuals have plenty of worries and priorities of their own, and even when they would have much to gain from co-operating, they cannot trust each other, or resist the temptation to free-ride, well enough to work together and become dangerous.

Incidentally, this provides an easy rebuttal to the “corporations are already superintelligent” claim—while corporations have a variety of mechanisms for trying to provide their employees with the proper incentives, anyone who’s worked for a big company knows that the employees tend to follow their own interests, even when they conflict with those of the company. It’s certainly nothing like the situation with a cell, where the survival of each organelle depends on the survival of the whole cell. If the cell dies, the organelles die; if the company fails, the employees can just get a new job.

It would seem to me that, whatever your take on the intelligence explosion, evolutionary history to date strongly suggests that new kinds of superorganisms—larger and more cohesive than human groups, and less dependent on crippling their members’ rationality in order to maintain group cohesion—would be a major risk for humanity. This is not to say that an intelligence explosion wouldn’t be dangerous as well—I have no idea what a mind that could think 1,000 times faster than me could do—but a co-operative explosion should be considered dangerous even if you thought a hard takeoff via recursive self-improvement (say) was impossible. And many of the ways of creating a superorganism (see below) seem to involve processes that could conceivably lead to the superorganisms having quite different values from humans. Even if no single superorganism could take over, that’s not much of a comfort for the ordinary humans caught in the crossfire.

How might a co-operative explosion happen? I see at least three possibilities:

  • Self-copying artificial intelligences. An AI need not have the evolved notion of a “self” whose interests are to be protected above those of its identical copies. An AI could be programmed to care only about the completion of a single goal (e.g. paperclips), and it could then copy itself freely, knowing that all of those copies would be working towards the same goal (see the toy sketch after this list).

  • Upload copy clans. Carl Shulman discusses this possibility in Whole Brain Emulation and the Evolution of Superorganisms. Some people might hold a view of personal identity on which being deleted is acceptable as long as close-enough copies of you continue to exist. In a world where uploading is possible, such people could copy themselves and then have those copies work together to further the goals of the joint organism. If the copies were willing to have themselves deleted or experimented on, they could develop brain modifications that further increased devotion to the superorganism. Furthermore, each copy could consent to being deleted if its interests seemed to be drifting apart from those of the organism as a whole.

  • Mind coalescences. In Coalescing Minds: Mind Uploading-Related Group Mind Scenarios, Harri Valpola and I discuss the notion of coalesced minds: hypothetical minds created by merging two brains through a sufficient number of high-bandwidth neural connections. In a world where uploading was possible, the creation of mind coalescences could be relatively straightforward. Several independent organisms could then literally join together to become a single entity.
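
To make the shared-goal advantage concrete, here is a minimal toy sketch (my own illustration, not drawn from Shulman’s paper or anything quoted above) of a one-shot public goods game: agents that each maximize their own payoff free-ride, while copies that optimize a single shared objective contribute fully and come out ahead. The payoff numbers and helper names such as `individual_payoff` and `selfish_choice` are arbitrary assumptions.

```python
# Toy public goods game: selfish individuals vs. a "copy clan" sharing one goal.
# Purely illustrative; all parameters are arbitrary.

N = 10           # agents in the group
MULTIPLIER = 3   # each contributed unit is multiplied and split among everyone
ENDOWMENT = 1.0  # each agent starts with one unit

def individual_payoff(my_contribution, total_contribution):
    """Payoff to one agent: what it kept, plus its share of the public pot."""
    return (ENDOWMENT - my_contribution) + MULTIPLIER * total_contribution / N

def selfish_choice(others_total):
    """A selfish agent compares contributing everything vs. nothing, holding the
    others fixed. Since MULTIPLIER / N < 1, free-riding always pays more."""
    contribute = individual_payoff(ENDOWMENT, others_total + ENDOWMENT)
    free_ride = individual_payoff(0.0, others_total)
    return ENDOWMENT if contribute > free_ride else 0.0

# Selfish group: everyone reasons the same way, so nobody contributes.
selfish_total = sum(selfish_choice(others_total=0.0) for _ in range(N))
selfish_payoff = individual_payoff(0.0, selfish_total)

# Copy clan: every copy cares only about the clan's total payoff, so it compares
# group outcomes instead; full contribution wins whenever MULTIPLIER > 1.
clan_total = N * ENDOWMENT if MULTIPLIER > 1 else 0.0
clan_payoff = individual_payoff(ENDOWMENT, clan_total)

print(f"Selfish group, per-agent payoff: {selfish_payoff:.2f}")  # 1.00
print(f"Copy clan, per-agent payoff:     {clan_payoff:.2f}")     # 3.00
```

The point of the sketch is only that removing the gap between individual and group objectives removes the free-rider problem entirely, which is exactly the trick that the biological superorganisms above had to evolve costly mechanisms to approximate.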

Below are some more excerpts from Haidt’s book:

Many animals are social: they live in groups, flocks, or herds. But only a few animals have crossed the threshold and become ultrasocial, which means that they live in very large groups that have some internal structure, enabling them to reap the benefits of the division of labor. Beehives and ant nests, with their separate castes of soldiers, scouts, and nursery attendants, are examples of ultrasociality, and so are human societies.

One of the key features that has helped all the nonhuman ultra-socials to cross over appears to be the need to defend a shared nest. [...] Hölldobler and Wilson give supporting roles to two other factors: the need to feed offspring over an extended period (which gives an advantage to species that can recruit siblings or males to help out Mom) and intergroup conflict. All three of these factors applied to those first early wasps camped out together in defensible naturally occurring nests (such as holes in trees). From that point on, the most cooperative groups got to keep the best nesting sites, which they then modified in increasingly elaborate ways to make themselves even more productive and more protected. Their descendants include the honeybees we know today, whose hives have been described as “a factory inside a fortress.”

Those same three factors applied to human beings. Like bees, our ancestors were (1) territorial creatures with a fondness for defensible nests (such as caves) who (2) gave birth to needy offspring that required enormous amounts of care, which had to be given while (3) the group was under threat from neighboring groups. For hundreds of thousands of years, therefore, conditions were in place that pulled for the evolution of ultrasociality, and as a result, we are the only ultrasocial primate. The human lineage may have started off acting very much like chimps, but by the time our ancestors started walking out of Africa, they had become at least a little bit like bees.

And much later, when some groups began planting crops and orchards, and then building granaries, storage sheds, fenced pastures, and permanent homes, they had an even steadier food supply that had to be defended even more vigorously. Like bees, humans began building ever more elaborate nests, and in just a few thousand years, a new kind of vehicle appeared on Earth—the city-state, able to raise walls and armies. City-states and, later, empires spread rapidly across Eurasia, North Africa, and Mesoamerica, changing many of the Earth’s ecosystems and allowing the total tonnage of human beings to shoot up from insignificance at the start of the Holocene (around twelve thousand years ago) to world domination today.

As the colonial insects did to the other insects, we have pushed all other mammals to the margins, to extinction, or to servitude. The analogy to bees is not shallow or loose. Despite their many differences, human civilizations and beehives are both products of major transitions in evolutionary history. They are motorboats.

The discovery of major transitions is Exhibit A in the retrial of group selection. Group selection may or may not be common among other animals, but it happens whenever individuals find ways to suppress selfishness and work as a team, in competition with other teams. Group selection creates group-related adaptations. It is not far-fetched, and it should not be a heresy to suggest that this is how we got the groupish overlay that makes up a crucial part of our righteous minds. [...]

According to Tomasello, human cognition veered away from that of other primates when our ancestors developed shared intentionality. At some point in the last million years, a small group of our ancestors developed the ability to share mental representations of tasks that two or more of them were pursuing together. For example, while foraging, one person pulls down a branch while the other plucks the fruit, and they both share the meal. Chimps never do this. Or while hunting, the pair splits up to approach an animal from both sides. Chimps sometimes appear to do this, as in the widely reported cases of chimps hunting colobus monkeys, but Tomasello argues that the chimps are not really working together. Rather, each chimp is surveying the scene and then taking the action that seems best to him at that moment. Tomasello notes that these monkey hunts are the only time that chimps seem to be working together, yet even in these rare cases they fail to show the signs of real cooperation. They make no effort to communicate with each other, for example, and they are terrible at sharing the spoils among the hunters, each of whom must use force to obtain a share of meat at the end. They all chase the monkey at the same time, yet they don’t all seem to be on the same page about the hunt.

In contrast, when early humans began to share intentions, their ability to hunt, gather, raise children, and raid their neighbors increased exponentially. Everyone on the team now had a mental representation of the task, knew that his or her partners shared the same representation, knew when a partner had acted in a way that impeded success or that hogged the spoils, and reacted negatively to such violations. When everyone in a group began to share a common understanding of how things were supposed to be done, and then felt a flash of negativity when any individual violated those expectations, the first moral matrix was born. (Remember that a matrix is a consensual hallucination.) That, I believe, was our Rubicon crossing.

Tomasello believes that human ultrasociality arose in two steps. The first was the ability to share intentions in groups of two or three people who were actively hunting or foraging together. (That was the Rubicon.) Then, after several hundred thousand years of evolution for better sharing and collaboration as nomadic hunter-gatherers, more collaborative groups began to get larger, perhaps in response to the threat of other groups. Victory went to the most cohesive groups—the ones that could scale up their ability to share intentions from three people to three hundred or three thousand people. This was the second step: Natural selection favored increasing levels of what Tomasello calls “group-mindedness”—the ability to learn and conform to social norms, feel and share group-related emotions, and, ultimately, to create and obey social institutions, including religion. A new set of selection pressures operated within groups (e.g., nonconformists were punished, or at very least were less likely to be chosen as partners for joint ventures) as well as between groups (cohesive groups took territory and other resources from less cohesive groups).

Shared intentionality is Exhibit B in the retrial of group selection. Once you grasp Tomasello’s deep insight, you begin to see the vast webs of shared intentionality out of which human groups are constructed. Many people assume that language was our Rubicon, but language became possible only after our ancestors got shared intentionality. Tomasello notes that a word is not a relationship between a sound and an object. It is an agreement among people who share a joint representation of the things in their world, and who share a set of conventions for communicating with each other about those things. If the key to group selection is a shared defensible nest, then shared intentionality allowed humans to construct nests that were vast and ornate yet weightless and portable. Bees construct hives out of wax and wood fibers, which they then fight, kill, and die to defend. Humans construct moral communities out of shared norms, institutions, and gods that, even in the twenty-first century, they fight, kill, and die to defend.

Haidt’s references on this include, but are not limited to, the following:

Okasha, S. (2006) Evolution and the Levels of Selection. Oxford: Oxford University Press.

Hölldobler, B., and E. O. Wilson. (2009) The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies. New York: Norton.

Bourke, A. F. G. (2011) Principles of Social Evolution. New York: Oxford University Press.

Wilson, E. O., and B. Hölldobler. (2005) “Eusociality: Origin and Consequences.” Proceedings of the National Academy of Sciences of the United States of America 102:13367–71.

Tomasello, M., A. Melis, C. Tennie, E. Wyman, E. Herrmann, and A. Schneider. (Forthcoming) “Two Key Steps in the Evolution of Human Cooperation: The Mutualism Hypothesis.” Current Anthropology.