Hot Air Doesn’t Disagree

Followup to: The Bedrock of Morality, Abstracted Idealized Dynamics

Tim Tyler comments:

Do the fox and the rabbit disagree? It seems reasonable to say that they do if they meet: the rabbit thinks it should be eating grass, and the fox thinks the rabbit should be in the fox’s stomach. They may argue passionately about the rabbit’s fate—and even stoop to violence.

Boy, you know, when you think about it, Nature turns out to be just full of disagreement.

Rocks, for example, fall down—so they agree with us, who also fall when pushed off a cliff—whereas hot air rises into the air, unlike humans.

I wonder why hot air disagrees with us so dramatically. I wonder what sort of moral justifications it might have for behaving as it does; and how long it will take to argue this out. So far, hot air has not been forthcoming in terms of moral justifications.

Physical systems that behave differently from you usually do not have factual or moral disagreements with you. Only a highly specialized subset of systems, when they do something different from you, should lead you to infer their explicit internal representation of moral arguments that could potentially lead you to change your mind about what you should do.

Attributing moral disagreements to rabbits or foxes is sheer anthropomorphism, in the full technical sense of the term—like supposing that lightning bolts are thrown by thunder gods, or that trees have spirits that can be insulted by human sexual practices and retaliate by withholding their fruit.

The rabbit does not think it should be eating grass. If questioned the rabbit will not say, “I enjoy eating grass, and it is good in general for agents to do what they enjoy, therefore I should eat grass.” Now you might invent an argument like that; but the rabbit’s actual behavior has absolutely no causal connection to any cognitive system that processes such arguments. The fact that the rabbit eats grass should not lead you to infer the explicit cognitive representation of, nor even infer the probable theoretical existence of, the sort of arguments that humans have over what they should do. The rabbit is just eating grass, like a rock rolls downhill and like hot air rises.

To think that the rabbit contains a little circuit that ponders morality and then finally does what it thinks it should do, and that the rabbit has arrived at the belief that it should eat grass, and that this is the explanation of why the rabbit is eating grass—from which we might infer that, if the rabbit is correct, perhaps humans should do the same thing—this is all as ridiculous as thinking that the rock wants to be at the bottom of the hill, concludes that it can reach the bottom of the hill by rolling, and therefore decides to exert a mysterious motive force on itself. Aristotle thought that, but there is a reason why Aristotelians don’t teach modern physics courses.

The fox does not argue that it is smarter than the rabbit and so deserves to live at the rabbit’s expense. To think that the fox is moralizing about why it should eat the rabbit, and this is why the fox eats the rabbit—from which we might infer that we as humans, hearing the fox out, would see its arguments as being in direct conflict with those of the rabbit, and we would have to judge between them—this is as ridiculous as thinking (as a modern human being) that lightning bolts are thrown by thunder gods in a state of inferrable anger.

Yes, foxes and rabbits are more complex creatures than rocks and hot air, but they do not process moral arguments. They are not that complex in that particular way.

Foxes try to eat rabbits and rabbits try to escape foxes, and from this there is nothing more to be inferred than from rocks falling and hot air rising, or water quenching fire and fire evaporating water. They are not arguing.

This anthropomorphism, the presumption that every system does what it does because of a belief about what it should do, is directly responsible for the belief that Pebblesorters create prime-numbered heaps of pebbles because they think that is what everyone should do. They don’t. Systems whose behavior indicates something about what agents should do are rare, and the Pebblesorters are not such systems. They don’t care about sentient life at all. They just sort pebbles into prime-numbered heaps.