The Reality of Emergence

Reply to The Futility of Emergence

In The Futility of Emergence, Eliezer takes an overly critical position on emergence as a theory. In this (short) article, I hope to challenge that view.

Emergence is not an empty phrase. The statements “consciousness is an emergent phenomenon” and “consciousness is a phenomenon” are not the same thing; the former conveys information that the latter does not. When we say something is emergent, we are referring to a well-defined concept.

From Wikipedia:

emergence is a phenomenon whereby larger entities arise through interactions among smaller or simpler entities such that the larger entities exhibit properties the smaller/simpler entities do not exhibit.

To say that A is an emergent property of X means that A arises from X in a way that is contingent on the interaction of X’s constituents (and not on those constituents themselves). If A is an emergent property of X, then the constituents of X do not possess A; A comes into existence as a categorial novum at the inception of X. The difference between system X and its constituent components with regard to property A is a difference of kind, not of degree: X’s constituents do not possess A in some tiny magnitude; they do not possess A at all.
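One way to sketch the core condition formally (my own notation; it is not taken from Eliezer’s post or from Wikipedia, and it leaves out the “arises from the interactions” clause):

```latex
% Necessary condition for emergence, in rough first-order notation:
% the system exhibits A, while none of its constituents do.
\[
\mathrm{Emergent}(A, X) \;\Rightarrow\; A(X) \,\wedge\, \forall c \in \mathrm{Constituents}(X) :\ \neg A(c)
\]
```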

Taken literally, that description fits every phenomenon in our universe above the level of individual quarks, which is part of the problem.

This is blatantly not true; size and mass, for example, are properties of elementary particles.

You can make no new predictions. You do not know anything about the behavior of real-world minds that you did not know before. It feels like you believe a new fact, but you don’t anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts—there’s no detailed internal model to manipulate. Those who proffer the hypothesis of “emergence” confess their ignorance of the internals, and take pride in it; they contrast the science of “emergence” to other sciences merely mundane.

I respectfully disagree.

When we say A is an emergent property of X, we say that X is more than the sum of its parts; aggregating and amplifying the properties of X’s constituents does not produce A. The proximate cause of A is not the constituents of X themselves; it is the interaction between those constituents.

Emergence is testable and falsifiable, and it makes advance predictions: if I say A is an emergent property of system X, then I am saying that none of the constituent components of X possess A (in any form or magnitude).

Statement: “consciousness (in humans) is an emergent property of the brain.”

Prediction: “individual neurons are not conscious to any degree.”

Observing a supposed emergent property in constituent components falsifies the theory of emergence (as far as that theory/phenomenon is concerned).
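As a toy illustration of that falsification rule (my own sketch; the function and predicate names are made up, not from the original post), the claim “A is an emergent property of X” is refuted the moment any constituent of X is found to exhibit A:

```python
def emergence_claim_falsified(exhibits_property, constituents):
    """Return True if the claim 'A is an emergent property of X' is falsified,
    i.e. if any constituent of X exhibits A (to any degree)."""
    return any(exhibits_property(c) for c in constituents)

# e.g. the claim "consciousness is an emergent property of the brain" would be
# falsified by finding even one conscious neuron:
# emergence_claim_falsified(is_conscious, neurons)
```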

The strength of a theory is not what it can predict, but what it can’t. Emergence excludes a lot of things; size and mass are not emergent properties of atoms (elementary physical particles possess both of them). Any property that the constituents of X possess (even to an astronomically lesser degree) is not emergent. This excludes a whole lot of properties: size, mass, density, electrical charge, etc. In fact, based on my (virtually non-existent) knowledge of physics, I suspect that all fundamental and derived quantities are not emergent properties (I once again reiterate that I don’t know physics).

Emergence does not function as a semantic stopsign or curiosity stopper for me. When I say consciousness is emergent, I have provided a skeletal explanation (at the highest abstract level) of the mechanism of consciousness. I have narrowed my search; I now know that consciousness is not a property of neurons, but arises from the interaction thereof. To use an analogy that I am (somewhat) familiar with, saying a property is emergent is like saying an algorithm is recursive; we are providing a high-level abstract description of both the phenomenon and the algorithm, and we are conveying (non-trivial) information about both. In the former case, we convey that the property arises as a result of the interaction of the constituent components of a system (and is not reducible to the properties of those constituents). In the latter case, we specify that the algorithm operates by taking as input the output of the algorithm on other instances of the problem (operating on itself). Saying a phenomenon is an emergent property of a system is analogous to saying that an algorithm is recursive: you do not have enough information to construct either the phenomenon or the algorithm, but you now know more about both than you did before, and the knowledge you have gained is non-trivial.
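To make the analogy concrete, here is a minimal recursive quicksort sketch (in Python, my own illustration; nothing in it comes from the original post). Knowing only that quicksort is “recursive” does not hand you this code, but it does tell you to expect the self-call:

```python
def quicksort(xs):
    """Sort a list of comparable items. Illustrative, not optimized."""
    if len(xs) <= 1:  # base case: nothing left to split
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x <= pivot]
    larger = [x for x in rest if x > pivot]
    # The "recursive" part: the algorithm consumes its own output
    # on smaller instances of the same problem.
    return quicksort(smaller) + [pivot] + quicksort(larger)
```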

Before: Human intelligence is an emergent product of neurons firing.

After: Human intelligence is a product of neurons firing.

How about this:

Before: “The quicksort algorithm is a recursive algorithm.”

After: “The quicksort algorithm is an algorithm.”

Before: Human intelligence is an emergent product of neurons firing.

After: Human intelligence is a magical product of neurons firing.

This seems to work just as well:

Before: “The quicksort algorithm is a recursive algorithm.”

After: “The quicksort algorithm is a magical algorithm.”

Does not each statement convey exactly the same amount of knowledge about the phenomenon’s behavior? Does not each hypothesis fit exactly the same set of outcomes?

It seems clear to me that in both cases, the original statement conveys more information than the edited version. I argue that this is the same for “emergence”; saying a phenomenon is an emergent property does convey useful non-trivial information about that phenomenon.

I shall answer the question below:

If I showed you two conscious beings, one which achieved consciousness through emergence and one that did not, would you be able to tell them apart?

Yes. For the being which achieved consciousness through means other than emergence, I know that at least some of its constituents are themselves conscious.

Emergent consciousness: A human brain.

Non-emergent consciousness: A hive mind.

The constituents of the hive mind are by themselves conscious, and I think that’s a useful distinction.

“Emergence” has become very popular, just as saying “magic” used to be very popular. “Emergence” has the same deep appeal to human psychology, for the same reason. “Emergence” is such a wonderfully easy explanation, and it feels good to say it; it gives you a sacred mystery to worship. Emergence is popular because it is the junk food of curiosity. You can explain anything using emergence, and so people do just that; for it feels so wonderful to explain things. Humans are still humans, even if they’ve taken a few science classes in college. Once they find a way to escape the shackles of settled science, they get up to the same shenanigans as their ancestors, dressed up in the literary genre of “science” but still the same species psychology.

Once again, I disagree with Eliezer. Describing a phenomenon as emergent is (for me) equivalent to describing an algorithm as recursive; I am merely providing a relevant characterisation to distinguish the subject (phenomenon/algorithm) from other subjects. Emergence is nothing magical to me; when I say consciousness is emergent, I carry no illusions that I now understand consciousness, and my curiosity is not sated, but, I argue, I am now more knowledgeable than I was before: I have an abstract conception of the mechanism of consciousness; it is very limited, but it is better than nothing. Telling you quicksort is recursive doesn’t tell you how to implement quicksort, but it does (significantly) constrain your search space; if you were going to run a brute-force search of algorithm design space to find quicksort, you now know to confine your search to recursive algorithms. Telling you that quicksort is recursive brings you closer to understanding quicksort than being told it’s just an algorithm. The same is true for saying consciousness is emergent. You now understand more about consciousness than you did before; you now know that it arises, as a categorial novum, from the interaction of neurons. Describing a phenomenon as “emergent” does not convey zero information, and thus I argue the category is necessary. Emergence is only as futile an explanation as recursion is.

Now that I have (hopefully) established that emergence is a real theory (albeit one with limited explanatory power, not unlike describing an algorithm as recursive), I would like to add something else. The above is a defence of the legitimacy of emergence as a theory; I am not necessarily saying that emergence is correct. It may be the case that no property of any system is emergent, and that all properties of systems are properties of at least one of their constituent components. The question of whether emergence is correct (whether there exists at least one property of at least one system that is not a property of any of its constituent components, not necessarily consciousness or intelligence) is an entirely different question, and is neither the thesis of this write-up nor a question I am currently equipped to tackle. If it is of any relevance, I do believe consciousness is at least a (weakly) emergent property of sapient animal brains.

Part of The Contrarian Sequences