On the nature of purpose

[cross-posted from my blog]

Introduction

Is the concept of purpose, and more generally teleological accounts of behaviour, to be banished from the field of biology?

For many years—essentially since the idea of Darwinian natural selection started to be properly understood and integrated into the intellectual fabric of the field—the consensus answer to this question among biology scholars was “yes”. Much more recently, however, interest in this question has been rekindled—notably driven by voices that contradict that former consensus.

This is the context in which this letter exchange between the philosophers Alex Rosenberg and Daniel Dennett is taking place. What is the nature of “purposes”? Are they real? But mostly, what would it even mean for them to be?

In the following, I will provide a summary and discussion of what I consider the key points and lines of disagreement between the two. Unless specified otherwise, quotes are taken from the letter exchange.

Rosenberg’s crux

Rosenberg and Dennett agree on large parts of their respective worldviews. They both share a “disenchanted” naturalist’s view—they believe that reality is (nothing but) causal and (in principle) explainable. They subscribe to the narrative of reductionism, which celebrates how scientific progress emancipated first the world of physics, and later the chemical and biological ones, from metaphysical beliefs. Through Darwin, we have come to understand the fundamental drivers of life as we know it—variation and natural selection.

But despite their shared epistemic foundation, Rosenberg suspects a fundamental difference in their views concerning the nature of purpose. Rosenberg—contrary to Dennett—sees a necessity for science (and scientists) to disabuse themselves, entirely, of any anthropocentric talk of purpose and meaning. Anyone who considers the use of the “intentional stance” justified would, according to Rosenberg, have to answer the following question:

What is the mechanism by which Darwinian natural selection turns reasons (tracked by the individual as purpose, meaning, beliefs and intentions) into causes (affecting the material world)?

Rosenberg, of course, doesn’t deny that humans—what he refers to as Gregorian creatures shaped by biological as well as cultural evolution—experience higher-level properties like emotions, intentions and meaning. Wilfrid Sellars calls this the “manifest image”: the framework in terms of which we ordinarily perceive and make sense of ourselves and the world. [1] But Rosenberg sees a tension between the scientific and the manifest image—one that is, in his eyes, irreconcilable.

“Darwinism is the only game in town”, according to Rosenberg. Everything can be, and ought to be, explained in terms of it. These higher-level properties—sweetness, cuteness, sexiness, funniness, colour, solidity, weight (not mass!)—are radically illusionary. Darwin’s account of natural selection doesn’t explain purpose, it explains it away. Just like physics and biology before them, cognitive science and psychology must now be disabused of the “intentional stance”.

In other words, it’s the recalcitrance of meaning that bothers Rosenberg—the fact that we appear to need it in how we make sense of the world, while also being unable to properly integrate it in our scientific understanding.

As Quine put it: “One may accept the Brentano thesis [about the nature of intentionality] as either showing the indispensability of intentional idioms and the importance of an autonomous science of intention, or as showing the baselessness of intentional idioms and the emptiness of a science of intention.” [2]

Rosenberg is compelled by the latter path. In his view, the recalcitrance of meaning is “the last bastion of resistance to the scientific world view. Science can do without them, in fact, it must do without them in its description of reality.” He doesn’t claim that notions of meaning have never been useful, but that they have “outlived their usefulness”, replaced, today, with better tools of scientific inquiry.

As I understand it, Rosenberg argues that purposes aren’t real because they aren’t tied up with reality, unable to affect the physical world. Acting as if they were real (by relying on the concept to explain observations) contributes to confusion and convoluted thinking. We ought, instead, to resort to the classical Darwinian explanations, where all behaviour boils down to evolutionary advantages and procreation (in a way that explains purpose away).

Rosenberg’s crux (or rather, my interpretation thereof) is that, if you want to claim that purposes are real—if you want to maintain purpose as a scientifically justified concept, one that is reconcilable with science—you need to be able to account for how reasons turn into causes.


Perfectly real illusions

While Dennett recognizes the challenges presented by Rosenberg, he refuses to be troubled by them. Dennett paints a possible “third path” out of Quine’s dilemma by suggesting that we understand the manifest image (i.e. mental properties, qualia) neither as “as real as physics” (thereby making it incomprehensible to science) nor as “radically illusionary” (thereby troubling our self-understanding as Gregorian creatures). Instead, Dennett suggests, we can understand it as a user-illusion: “ways of being informed about things that matter to us in the world (our affordances) because of the way we and the environment we live in (microphysically [3]) are.”

I suggest that this is, in essence, a deeply pragmatic account. (What account other than pragmatism, really, could utter, with the same ethos, a sentence like: “These are perfectly real illusions!”)

While he doesn’t say so explicitly, we can interpret Dennett as invoking the bounded nature of human minds and their perceptual capacity. Mental representations, while not representing reality fully truthfully (e.g. there is no microphysical account of colours, just photons), aren’t arbitrary either. They arise (in part) from reality, and through the compression inherent to the mind’s cognitive processes, these representations get distorted so as to form false, yet in all likelihood useful, illusions.

These representations are useful because they have evolved to be such: after all, it is through the interaction with the causal world that the Darwinian fitness of an agent is determined; whether we live or die, procreate or fail to do so. Our ability to perceive has been shaped by evolution to track reality (i.e. to be truthful), but only exactly to the extent that this gives us a fitness advantage (i.e. is useful). Our perceptions are neither completely unrestrained nor completely constrained by reality, and therefore they are neither entirely arbitrary nor entirely accurate.

Let’s talk about the nature of patterns for a moment. Patterns are critical to how intelligent creatures make sense of and navigate the world. They allow (what would otherwise be far too much) data to be compressed, while still granting predictive power. But are patterns real? Patterns directly stem from reality—they are to be found in reality—and, in this very sense, they are real. But, if there wasn’t anyone or anything to perceive and make use of this structural property of the real world, it wouldn’t be meaningful to talk of patterns. Reality doesn’t care about patterns. Observers/agents do.
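To make the compression idea tangible, here is a minimal sketch (my own illustration, not from the letter exchange): a repeating sequence can be stored as just its repeating unit, and it is precisely that compressed description that grants predictive power over what comes next.

```python
def compress(seq):
    """Return the shortest prefix whose repetition reproduces seq."""
    for k in range(1, len(seq) + 1):
        pattern = seq[:k]
        if all(seq[i] == pattern[i % k] for i in range(len(seq))):
            return pattern
    return seq

def predict_next(seq):
    """Predict the next element by extrapolating the detected pattern."""
    pattern = compress(seq)
    return pattern[len(seq) % len(pattern)]

data = [1, 0, 0, 1, 0, 0, 1, 0]   # 8 observations
print(compress(data))              # the 3-element repeating unit: [1, 0, 0]
print(predict_next(data))          # prediction for the 9th observation: 0
```

Eight observations shrink to a three-element description, and the observer who holds that description can anticipate the ninth observation; an observer storing the raw data gains nothing. The pattern is in the data, but its usefulness only exists relative to a bounded agent exploiting it.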

This same reasoning can be applied to intentions. Intentions are meaningful patterns in the world. An observer with limited resources who wants to make sense of the world (i.e. an agent that wants to reduce sample complexity) can abstract along the dimension of “intentionality” to reliably get good predictions about the world. (Except that “abstracting along the dimension of intentionality” isn’t so much an active choice of the observer as something that emerges because intentions are a meaningful pattern.) The “intentionality-based” prediction does well at ignoring variables that aren’t sufficiently predictive and capturing the ones that are, which is critical in the context of a bounded agent.

Another case in point: affordances. In the preface to his book Surfing Uncertainty, Andy Clark writes: “[...] different (but densely interanimated) neural populations learn to predict various organism-salient regularities pertaining at many spatial and temporal scales. [...] The world is thus revealed as a world tailored to human needs, tasks and actions. It is a world built of affordances—opportunities for action and intervention.” As with patterns, the world isn’t made up of affordances. And yet, they are real in the sense of what Dennett calls user-illusions.


The cryptographer’s constraint

Dennett goes on to endow these illusionary reasons with further “realness” by invoking the cryptographer’s constraint:

It is extremely hard—practically infeasible—to design an even minimally complex system for the code of which there exists more than one reasonable decryption/​translation.

Dennett uses a simple crossword puzzle to illustrate the idea: “Consider a crossword puzzle that has two different solutions, and in which there is no fact that settles which is the “correct” solution. The composer of the crossword went to great pains to devise a puzzle with two solutions. [...] If making a simple crossword puzzle with two solutions is difficult, imagine how difficult it would be to take the whole corpus of human utterances in all languages and come up with a pair of equally good versions of Google Translate that disagreed!” [slight edits to improve readability]

The practical consequence of the constraint is that, “if you can find one reasonable decryption of a cipher-text, you’ve found the decryption.” Furthermore, this constraint is a general property of all forms of encryption/decryption.

Let’s look at the sentence: “Give me a peppermint candy!”

Given the cryptographer’s constraint, there are, practically speaking, very (read: astronomically) few plausible interpretations of the words “peppermint”, “candy”, etc. This is at the heart of what makes meaning non-arbitrary and language reliable.

To add a bit of nuance: the fact that the concept “peppermint” reliably translates to the same meaning across minds requires iterated interactions. In other words, Dennett doesn’t claim that, if I just now came up with an entirely new concept (say “klup”), its meaning would immediately be unambiguously clear. But its meaning (across minds) would become increasingly precise and robust after some time in use, and—on evolutionary time horizons—we can be preeetty sure we mean (to all practical relevance) the same things by the words we use.

But what does all this have to do with the question of whether purpose is real? Here we go:

The cryptographer’s constraint—which I will henceforth refer to as the principle of pragmatic reliability [4]—is an essential puzzle piece in understanding what allows representations of reasons (e.g. a sentence making a claim) to turn into causes (e.g. a human taking a certain action because of that claim).

We are thus starting to get closer to Rosenberg’s crux as stated above: a scientific account for how reasons become causes. There is one more leap to take.

***

Reasons-turning-causes

Having invoked the role of pragmatic reliability, let’s examine another pillar of Dennett’s view—one that will eventually get us all the way to addressing Rosenberg’s crux.

Rosenberg says: “I see how we represent in public language, turning inscriptions and noises into symbols. I don’t see how, prior to us and our language, mother nature (a.k.a Darwinian natural selection) did it.”

What Rosenberg conceives of as an insurmountable challenge to Dennett’s view, the latter prefers, figuratively speaking, to walk around rather than over. As developed at length in his book From Bacteria to Bach and Back, Dennett suggests that “mother nature didn’t represent reasons at all”, nor did it need to.

First, the mechanism of natural selection uncovers what Dennett calls “free-floating rationales”—reasons that existed billions of years before, and independently of, reasoners. Only when the tree of life grew a particular (and so far unique) branch—humans, together with their use of language—did these reasons start to get represented.

“We humans are the first reason representers on the planet and maybe in the universe. Free-floating rationales are not represented anywhere, not in the mind of God, or Mother Nature, and not in the organisms who benefit from all the arrangements of their bodies for which there are good reasons. Reasons don’t have to be representations; representations of reasons do.”

This is to say: it isn’t exactly the reasons, so much as their representations, that become causes.

Reasons-turning-causes, according to Dennett, is unique to humans because only humans represent reasons. I would add the nuance that the capacity to represent lies on a spectrum rather than being binary. Some other animals seem to be able to do something like representation, too. [5] That said, humans remain unchallenged in the degree to which they have developed the capacity to represent (among the forms of life we are currently aware of).

“Bears have a good reason to hibernate, but they don’t represent that reason, any more than trees represent their reason for growing tall: to get more sunlight than the competition.” While there are rational explanations for the bear’s or the tree’s behaviour, they don’t understand, think about or represent these reasons. The rationale has been discovered by natural selection, but the bear/tree doesn’t know—nor does it need to—why it wants to stay in its den during winter.

Language plays a critical role in this entire representation-jazz. Language is instrumental to our ability to represent; whether as necessary precursor, mediator or (ex-post) manifestation of that ability remains a controversial question among philosophers of language. Less controversial, however, is the role of language in allowing us to externalize representations of reasons, thereby “creating causes” not only for ourselves but also for the people around us. Wilfrid Sellars suggested that language bore what he called “the space of reasons”—the space of argumentation, explanation, query and persuasion. [6] In other words, language bore the space in which reasons can become causes.

We can even go a step further: while acknowledging the role of natural selection in shaping what we are—the fact that the purposes of our genes are determined by natural selection—we are still free to make our own choices. To put it differently: “Humans create the purposes they are subject to; we are not subject to purposes created by something external to us.” [7]

In From Darwin to Derrida: Selfish Genes, Social Selves, and the Meanings of Life, David Haig argues for this point of view by suggesting that there does not need to be full concordance, nor congruity, between our psychological motivations (e.g. wanting to engage in sexual activity because it is pleasurable, wanting to eat a certain food because it is tasty) and the reasons why we have those motivations (e.g. in order to pass on our genetic material).

There is a piece of folk wisdom that goes: “the meaning of life is the meaning we give it”. Based on what has been discussed in this essay, we can see this saying in a different, more scientific light: as a testament to the fact that we humans are creatures that represent meaning, and by doing so we turn “free-floating rationales” into causes that govern our own behaviour.

Thanks to particlemania, Kyle Scott and Romeo Stevens for useful discussions and comments on earlier drafts of this post.

***

[1] Sellars, Wilfrid. “Philosophy and the scientific image of man.” Science, perception and reality 2 (1963).
Also see: deVries, Willem. “Wilfrid Sellars”, The Stanford Encyclopedia of Philosophy (Fall 2020 Edition). Retrieved from: https://plato.stanford.edu/archives/fall2020/entries/sellars/

[2] Quine, Willard Van Orman. “Word and Object. New Edition.” MIT Press (1960).

[3] I.e. the physical world at the level of atoms

[4] AI safety relevant side note: The idea that translations of meaning need only be sufficiently reliable in order to be reliably useful might provide an interesting avenue for AI safety research.

Language works, as evidenced by the striking success of human civilisations made possible through advanced coordination, which in turn requires advanced communication. (Sure, humans miscommunicate what feels like a whole lot, but in the bigger scheme of things, we still appear to be pretty damn good at this communication thing.)

Notably, language works without there being theoretically air-tight proofs that map meanings onto words.

Right there, we have an empirical case study of a symbolic system that functions on a (merely) pragmatically reliable regime. We can use it to inform our priors on how well this regime might work in other systems, such as AI, and how and why it tends to fail.

One might argue that a pragmatically reliable alignment isn’t enough—not given the sheer optimization power of the systems we are talking about. Maybe that is true; maybe we do need more certainty than pragmatism can provide. Nevertheless, I believe that there are sufficient reasons for why this is an avenue worth exploring further.

Personally, I am most interested in this line of thinking from an AI ecosystems/CAIS point of view, and as a way of addressing the problem of the transient and contextual nature of preferences (what I consider a major challenge).

[5] People wanting to think about this more might be interested in looking into vocal (production) learning—the ability to “modify acoustic and syntactic sounds, acquire new sounds via imitation, and produce vocalizations”. This conversation might be a good starting point.

[6] Sellars, Wilfrid. In the space of reasons: Selected essays of Wilfrid Sellars. Harvard University Press (2007).

[7] Quoted from: https://twitter.com/ironick/status/1324778875763773448