Three Fallacies of Teleology

Followup to: Anthropomorphic Optimism

Aristotle distinguished between four senses of the Greek word aition, which in English is translated as “cause”, though Wikipedia suggests that a better translation is “maker”. Aristotle’s theory of the Four Causes, then, might be better translated as the Four Makers. These were his four senses of aitia: The material aition, the formal aition, the efficient aition, and the final aition.

The material aition of a bronze statue is the substance it is made from, bronze. The formal aition is the substance’s form, its statue-shaped-ness. The efficient aition best translates as the English word “cause”; we would think of the artisan casting the statue, though Aristotle referred to the art of bronze-casting the statue, and regarded the individual artisan as a mere instantiation.

The final aition was the goal, or telos, or purpose of the statue, that for the sake of which the statue exists.

Though Aristotle considered knowledge of all four aitia necessary, he regarded knowledge of the telos as the knowledge of highest order. In this, Aristotle followed in the path of Plato, who had earlier written:

Imagine not being able to distinguish the real cause from that without which the cause would not be able to act as a cause. It is what the majority appear to do, like people groping in the dark; they call it a cause, thus giving it a name that does not belong to it. That is why one man surrounds the earth with a vortex to make the heavens keep it in place, another makes the air support it like a wide lid. As for their capacity of being in the best place they could possibly be put, this they do not look for, nor do they believe it to have any divine force...

Suppose that you translate “final aition” as “final cause”, and assert directly:

“Why do human teeth develop with such regularity, into a structure well-formed for biting and chewing? You could try to explain this as an incidental fact, but think of how unlikely that would be. Clearly, the final cause of teeth is the act of biting and chewing. Teeth develop with regularity, because of the act of biting and chewing—the latter causes the former.”

A modern-day sophisticated Bayesian will at once remark, “This requires me to draw a circular causal diagram with an arrow going from the future to the past.”

It’s not clear to me to what extent Aristotle appreciated this point—that you could not draw causal arrows from the future to the past. Aristotle did acknowledge that teeth also needed an efficient cause to develop. But Aristotle may have believed that the efficient cause could not act without the telos, or was directed by the telos, in which case we again have a reversed direction of causality, a dependency of the past on the future. I am no scholar of the classics, so it may be only myself who is ignorant of what Aristotle believed on this score.

So the first way in which teleological reasoning may be an outright fallacy, is when an arrow is drawn directly from the future to the past. In every case where a present event seems to happen for the sake of a future end, that future end must be materially represented in the past.

Suppose you’re driving to the supermarket, and you say that each right turn and left turn happens for the sake of the future event of your being at the supermarket. Then the actual efficient cause of the turn, consists of: the representation in your mind of the event of yourself arriving at the supermarket; your mental representation of the street map (not the streets themselves); your brain’s planning mechanism that searches for a plan that represents arrival at the supermarket; and the nerves that translate this plan into the motor action of your hands turning the steering wheel.

All these things exist in the past or present; no arrow is drawn from the future to the past.
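To make this concrete, here is a minimal sketch of such a planner, with an invented street map (all names hypothetical). Every quantity the search touches, the goal representation, the map representation, the frontier of partial plans, exists in the present; the future event of arrival never enters the computation, only its present representation:

```python
from collections import deque

# A hypothetical street map, as represented in the driver's head right
# now; each intersection maps to the intersections one turn away.
street_map = {
    "home": ["oak_st", "elm_st"],
    "oak_st": ["main_st"],
    "elm_st": ["main_st", "dead_end"],
    "main_st": ["supermarket"],
    "dead_end": [],
    "supermarket": [],
}

def plan(start, goal, graph):
    """Breadth-first search for a route: every step consults only the
    present representations (map and goal), never the future itself."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no plan found

print(plan("home", "supermarket", street_map))
# ['home', 'oak_st', 'main_st', 'supermarket']
```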

In biology, similarly, we explain the regular formation of teeth, not by letting it be caused directly by the future act of chewing, but by using the theory of natural selection to relate past events of chewing to the organism’s current genetic makeup, which physically controls the formation of the teeth. Thus, we account for the current regularity of the teeth by referring only to past and present events, never to future events. Such evolutionary reasoning is called “teleonomy”, in contrast with teleology.

We can see that the efficient cause is primary, not the final cause, by considering what happens when the two come into conflict. The efficient cause of human taste buds is natural selection on past human eating habits; the final cause of human taste buds is acquiring nutrition. From the efficient cause, we should expect human taste buds to seek out resources that were scarce in the ancestral environment, like fat and sugar. From the final cause, we would expect human taste buds to seek out resources scarce in the current environment, like vitamins and fiber. From the sales numbers on candy bars, we can see which wins. The saying “Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers” asserts the primacy of teleonomy over teleology.

Similarly, if you have a mistake in your mind about where the supermarket lies, the final event of your arrival at the supermarket, will not reach backward in time to steer your car. If I know your exact state of mind, I will be able to predict your car’s trajectory by modeling your current state of mind, not by supposing that the car is attracted to some particular final destination. If I know your mind in detail, I can even predict your mistakes, regardless of what you think is your goal.

The efficient cause has screened off the telos: If I can model the complete mechanisms at work in the present, I never have to take into account the future in predicting the next time step.
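One way to state the screening-off point formally (my gloss, not anything in the original argument): if the complete present state determines a future fact, then conditioning on that fact adds nothing to a prediction of the next step.

```latex
% Screening off, stated as conditional independence (a gloss, under the
% assumption that the complete present state x_t determines the future
% fact x_T, i.e. x_T = g(x_t)):
$$ P(x_{t+1} \mid x_t, x_T) \;=\; P(x_{t+1} \mid x_t) $$
% Conditioning on the future is redundant once the present is fully known.
```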

So that is the first fallacy of teleology—to make the future a literal cause of the past.

Now admittedly, it may be convenient to engage in reasoning that would be fallacious if interpreted literally. For example:

I don’t know the exact state of Mary’s every neuron. But I know that she desires to be at the supermarket. If Mary turns left at the next intersection, she will then be at the supermarket (at time t=1). Therefore Mary will turn left (at time t=0).

But this is only a convenient shortcut, one that appears to let the future affect Mary’s present actions. More rigorous reasoning would say:

My model predicts that if Mary turns left she will arrive at the supermarket. I don’t know her every neuron, but I believe Mary has a model similar to mine. I believe Mary desires to be at the supermarket. I believe that Mary has a planning mechanism similar to mine, which leads her to take actions that her model predicts will lead to the fulfillment of her desires. Therefore I predict that Mary will turn left.

No direct mention of the actual future has been made. I predict Mary by imagining myself to have her goals, then putting myself and my planning mechanisms into her shoes, letting my brain do planning-work that is similar to the planning-work I expect Mary to do. This requires me to talk only about Mary’s goal, our models (presumed similar), and our planning mechanisms (presumed similar): all forces active in the present.

And the benefit of this more rigorous reasoning, is that if Mary is mistaken about the supermarket’s location, then I do not have to suppose that the future event of her arrival reaches back and steers her correctly anyway.
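Continuing the hypothetical sketch from above: give Mary’s mental map a mistake, and simulating her present state of mind predicts the turns she will actually make, mistake included.

```python
# Mary's map contains a mistake: she believes the supermarket lies off
# elm_st.  (street_map and plan are from the earlier sketch.)
marys_map = dict(street_map)
marys_map["elm_st"] = ["supermarket"]   # her false belief
marys_map["main_st"] = []               # the route she doesn't know about

# Predict her turns from her present state of mind alone:
print(plan("home", "supermarket", marys_map))
# ['home', 'elm_st', 'supermarket'] -- the turns she will actually make,
# mistake included; the supermarket's real location never appears.
```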

Teleological reasoning is anthropomorphic—it uses your own brain as a black box to predict external events. Specifically, teleology uses your brain’s planning mechanism as a black box to predict a chain of future events, by planning backward from a distant outcome.

Now we are talking about a highly generalized form of anthropomorphism—and indeed, it is precisely to introduce this generalization that I am talking about teleology! You know what it’s like to feel purposeful. But when someone says, “water runs downhill so that it will be at the bottom”, you don’t necessarily imagine little sentient rivulets alive with quiet determination. Nonetheless, when you ask, “How could the water get to the bottom of the hill?” and plot out a course down the hillside, you’re recruiting your own brain’s planning mechanisms to do it. That’s what the brain’s planner does, after all: it finds a path to a specified destination starting from the present.

And if you expect the water to escape local minima so it can get all the way to the bottom of the hill—to avoid being trapped in small puddles far above the ground—then your anthropomorphism is going to produce the wrong prediction. (This is how a lot of mistaken evolutionary reasoning gets done, since evolution has no foresight, and only takes the next greedy local step.)
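A toy version of the contrast, with an invented hillside profile: a purely local rule stops in the first puddle it meets, while anthropomorphic planning would confidently predict the true bottom.

```python
# Invented hillside: height of the water's position at points 0..7.
heights = [9, 7, 5, 6, 8, 4, 2, 0]   # puddle at index 2; true bottom at 7

def flow(pos, heights):
    """Purely local rule: move to the lower neighbor, else stop.
    No lookahead, no planning -- like water, or like evolution."""
    while True:
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(heights)]
        best = min(neighbors, key=lambda p: heights[p])
        if heights[best] >= heights[pos]:
            return pos   # trapped: every neighbor is uphill
        pos = best

print(flow(0, heights))  # 2 -- stuck in the puddle, not at the true bottom
```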

But consider the subtlety: you may have produced a wrong, anthropomorphic prediction of the water without ever thinking of it as a person—without ever visualizing it as having feelings—without even thinking “the water has purpose” or “the water wants to be at the bottom of the hill”—but only saying, as Aristotle did, “the water’s telos is to be closer to the center of the Earth”. Or maybe just, “the water runs downhill so that it will be at the bottom”. (Or, “I expect that human taste buds will take into account how much of each nutrient the body needs, and so reject fat and sugar if there are enough calories present, since evolution produced taste buds in order to acquire nutrients.”)

You don’t notice instinctively when you’re using an aspect of your brain as a black box to predict outside events. Consequentialism just seems like an ordinary property of the world, something even rocks could do.

It takes a deliberate act of reductionism to say: “But the water has no brain; how can it predict ahead to see itself being trapped in a local puddle, when the future cannot directly affect the past? How indeed can anything at all happen in the water so that it will, in the future, be at the bottom? No; I should try to understand the water’s behavior using only local causes, found in the immediate past.”

It takes a deliberate act of reductionism to identify telos as purpose, and purpose as a mental property which is too complicated to be ontologically fundamental. You don’t realize, when you ask “What does this telos-imbued object do next?”, that your brain is answering by calling on its own complicated planning mechanisms, that search multiple paths and do means-end reasoning. Purpose just seems like a simple and basic property; the complexity of your brain that produces the predictions is hidden from you. It is an act of reductionism to see purpose as requiring a complicated AI algorithm that needs a complicated material embodiment.

So this is the second fallacy of teleology—to attribute goal-directed behavior to things that are not goal-directed, perhaps without even thinking of the things as alive and spirit-inhabited, but only thinking, X happens in order to Y. “In order to” is mentalistic language, even though it doesn’t seem to name a blatantly mental property like “fearful” or “thinks it can fly”.

Remember the sequence on free will? The problem, it turned out, was that “could” was a mentalistic property—generated by the planner in the course of labeling states as reachable from the start state. It seemed like “could” was a physical, ontological property. When you say “could” it doesn’t sound like you’re talking about states of mind. Nonetheless, the mysterious behavior of could-ness turned out to be understandable only by looking at the brain’s planning mechanisms.
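One way to cash this out (a hypothetical sketch, not a model of actual neural planning): “could” as a label the planner assigns to exactly those states it finds reachable from the start state.

```python
# Toy state space: which states of the world my actions connect.
# (All states invented for illustration.)
transitions = {
    "sitting": ["standing"],
    "standing": ["sitting", "walking"],
    "walking": ["standing", "at_door"],
    "at_door": ["outside"],
    "outside": [],
    "flying": [],   # no action of mine leads here
}

def could(start, transitions):
    """'Could'-ness as reachability: label every state the planner can
    reach from the start state."""
    reachable, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for nxt in transitions[state]:
            if nxt not in reachable:
                reachable.add(nxt)
                frontier.append(nxt)
    return reachable

print("outside" in could("sitting", transitions))  # True: I could go outside
print("flying" in could("sitting", transitions))   # False: I could not fly
```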

Since mentalistic reasoning uses your own mind as a black box to generate its predictions, it very commonly generates wrong questions and mysterious answers.

If you want to accomplish anything related to philosophy, or anything related to Artificial Intelligence, it is necessary to learn to identify mentalistic language and root it all out—which can only be done by analyzing innocent-seeming words like “could” or “in order to” into the complex cognitive algorithms that are their true identities.

(If anyone accuses me of “extreme reductionism” for saying this, let me ask how likely it is that we live in an only partially reductionist universe.)

The third fallacy of teleology is to commit the Mind Projection Fallacy with respect to telos, supposing it to be an inherent property of an object or system. Indeed, one does this every time one speaks of the purpose of an event, rather than speaking of some particular agent desiring the consequences of that event.

I suspect this is why people have trouble understanding evolutionary psychology—in particular, why they suppose that all human acts are unconsciously directed toward reproduction. “Mothers who loved their children outreproduced those who left their children to the wolves” becomes “natural selection produced motherly love in order to ensure the survival of the species” becomes “the purpose of acts of motherly love is to increase the mother’s fitness”. Well, if a mother apparently drags her child off the train tracks because she loves the child, that’s also the purpose of the act, right? So by a fallacy of compression—a mental model that has one bucket where two buckets are needed—the purpose must be one or the other: either love or reproductive fitness.

Similarly with those who hear of evolutionary psychology and conclude that the meaning of life is to increase reproductive fitness—hasn’t science demonstrated that this is the purpose of all biological organisms, after all?

Likewise with that fellow who concluded that the purpose of the universe is to increase entropy—the universe does so consistently, therefore it must want to do so—and that this must therefore be the meaning of life. Pretty sad purpose, I’d say! But of course the speaker did not seem to realize what it means to want to increase entropy as much as possible—what this goal really implies, that you should go around collapsing stars to black holes. Instead the speaker focused on a few selected activities that increase entropy, like thinking. You couldn’t ask for a clearer illustration of a fake utility function.

I call this a “teleological capture”—where someone comes to believe that the telos of X is Y, relative to some agent, or optimization process, or maybe just statistical tendency, from which it follows that any human or other agent who does X must have a purpose of Y in mind. The evolutionary reason for motherly love becomes its telos, and seems to “capture” the apparent motives of human mothers. The game-theoretical reason for cooperating on the Iterated Prisoner’s Dilemma becomes the telos of cooperation, and seems to “capture” the apparent motives of human altruists, who are thus revealed as being selfish after all. Charity increases status, which people are known to desire; therefore status is the telos of charity, and “captures” all claims to kinder motives. Etc. etc. through half of all amateur philosophical reasoning about the meaning of life.

These then are three fallacies of teleology: Backward causality, anthropomorphism, and teleological capture.