Recognizing Intelligence

Previously in series: Building Something Smarter

Humans in Funny Suits inveighed against TV and movie “aliens” who think and act like 21st-century middle-class Westerners, even if they have tentacles or exoskeletons. If you were going to seriously ask what real aliens might be like, you would try to make fewer assumptions—a difficult task when the assumptions are invisible.

I previously spoke of how you don’t have to start out by assuming any particular goals, when dealing with an unknown intelligence. You can use some of your evidence to deduce the alien’s goals, and then use that hypothesis to predict the alien’s future achievements, thus making an epistemological profit.

But could you, in principle, recognize an alien intelligence without even hypothesizing anything about its ultimate ends—anything about the terminal values it’s trying to achieve?

This sounds like it goes against my suggested definition of intelligence, or even optimization. How can you recognize something as having a strong ability to hit narrow targets in a large search space, if you have no idea what the target is?

And yet, intuitively, it seems easy to imagine a scenario in which we could recognize an alien’s intelligence while having no concept whatsoever of its terminal values—having no idea where it’s trying to steer the future.

Suppose I landed on an alien planet and discovered what seemed to be a highly sophisticated machine, all gleaming chrome as the stereotype demands. Can I recognize this machine as being in any sense well-designed, if I have no idea what the machine is intended to accomplish? Can I guess that the machine’s makers were intelligent, without guessing their motivations?

And again, it seems like in an intuitive sense I should obviously be able to do so. I look at the cables running through the machine, and find large electrical currents passing through them, and discover that the material is a flexible high-temperature high-amperage superconductor. Dozens of gears whir rapidly, perfectly meshed...

I have no idea what the machine is doing. I don’t even have a hypothesis as to what it’s doing. Yet I have recognized the machine as the product of an alien intelligence. Doesn’t this show that “optimization process” is not an indispensable notion to “intelligence”?

But you can’t possibly recognize intelligence without at least having a concept of “intelligence” that divides the universe into intelligent and unintelligent parts. For there to be a concept, there has to be a boundary. So what am I recognizing?

If I don’t see any optimization criterion by which to judge the parts or the whole—so that, as far as I know, a random volume of air molecules or a clump of dirt would be just as good a design—then why am I focusing on this particular object and saying, “Here is a machine”? Why not say the same about a cloud or a rainstorm?

Why is it a good hypothesis to suppose that intelligence or any other optimization process played a role in selecting the form of what I see, any more than it is a good hypothesis to suppose that the dust particles in my room are arranged by dust elves?

Consider that gleaming chrome. Why did humans start making things out of metal? Because metal is hard; it retains its shape for a long time. So when you try to do something, and the something stays the same for a long period of time, the way-to-do-it may also stay the same for a long period of time. So you face the subproblem of creating things that keep their form and function. Metal is one solution to that subproblem.

There are no-free-lunch theorems showing the impossibility of various kinds of inference, in maximally disordered universes. In the same sense, if an alien’s goals were maximally disordered, it would be unable to achieve those goals and you would be unable to detect their achievement.
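The no-free-lunch result alluded to here can be stated compactly. One standard form is the Wolpert and Macready theorem for black-box search, sketched below in their notation: averaged uniformly over all possible objective functions, every non-revisiting search algorithm performs identically.

```latex
% Sketch of the Wolpert-Macready no-free-lunch theorem for search:
% for any two non-revisiting black-box search algorithms a_1 and a_2,
% and any number m of distinct evaluations,
\sum_{f} P\!\left(d^{y}_{m} \mid f, m, a_1\right)
  \;=\;
\sum_{f} P\!\left(d^{y}_{m} \mid f, m, a_2\right)
% where the sum ranges over all objective functions f : X -> Y
% and d^y_m is the sequence of objective values observed after m steps.
% With the "goal" f maximally disordered, no searcher can do better
% than any other; this is the sense in which inference fails.
```

The regularity assumed in the next paragraph is exactly the sort of structure that breaks this symmetry.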

But as simple a form of negentropy as regularity over time—that the alien’s terminal values don’t take on a new random form with each clock tick—can imply that hard metal, or some other durable substance, would be useful in a “machine”—a persistent configuration of material that helps promote a persistent goal.

The gears are a solution to the problem of transmitting mechanical forces from one place to another, which you would want to do because of the presumed economy of scale in generating the mechanical force at a central location and then distributing it. In their meshing, we recognize a force of optimization applied in the service of a recognizable instrumental value: most random gears, or random shapes turning against each other, would fail to mesh, or fly apart. Without knowing what the mechanical forces are meant to do, we recognize something that transmits mechanical force—this is why gears appear in many human artifacts, because it doesn’t matter much what kind of mechanical force you need to transmit on the other end. You may still face problems like trading torque for speed, or moving mechanical force from generators to appliers.
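As an aside on what “trading torque for speed” cashes out to: for an idealized, lossless gear pair, power is conserved across the mesh, so torque and rotational speed scale inversely with the tooth ratio. This is the standard textbook relation, stated here only to pin down the trade-off being gestured at.

```latex
% Idealized (lossless, rigid) gear pair: power in equals power out,
% so torque and angular velocity trade off through the tooth ratio:
\tau_{\mathrm{in}}\,\omega_{\mathrm{in}} = \tau_{\mathrm{out}}\,\omega_{\mathrm{out}}
\qquad\Longrightarrow\qquad
\frac{\tau_{\mathrm{out}}}{\tau_{\mathrm{in}}}
  = \frac{\omega_{\mathrm{in}}}{\omega_{\mathrm{out}}}
  = \frac{N_{\mathrm{out}}}{N_{\mathrm{in}}}
% where N_in and N_out are the tooth counts of the driving and driven gears.
```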

These are not universally convergent instrumental challenges. They probably aren’t even convergent with respect to maximum-entropy goal systems (which are mostly out of luck).

But relative to the space of low-entropy, highly regular goal systems—goal systems that don’t pick a new utility function for every different time and every different place—that negentropy pours through the notion of “optimization” and comes out as a concentrated probability distribution over what an “alien intelligence” would do, even in the “absence of any hypothesis” about its goals.

Because the “absence of any hypothesis”, in this case, does not correspond to a maxentropy distribution, but rather to an ordered prior that is ready to recognize any structure that it sees. If you see the aliens making cheesecakes over and over and over again, in many different places and times, you are ready to say “the aliens like cheesecake” rather than “my, what a coincidence”. Even in the absence of any notion of what the aliens are doing—whether they’re making cheesecakes or paperclips or eudaimonic sentient beings—this low-entropy prior itself can pour through the notion of “optimization” and be transformed into a recognition of solved instrumental problems.
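To make the cheesecake-versus-coincidence point concrete, here is a toy Bayesian sketch of my own (the action count K, repeat probability p_stick, and observation count n are arbitrary illustrative assumptions, not anything the argument depends on). The “regular” hypothesis never names a preferred action; it only asserts that some stable preference exists.

```python
# A toy Bayesian sketch (my own illustration, not from the original post):
# two hypotheses about an alien's behavior over K possible actions.
#   H_regular: the alien has SOME fixed preferred action (we don't know which),
#              and performs it each time with probability p_stick.
#   H_random:  every action is an independent uniform draw ("pure coincidence").
# We never assume WHICH action is preferred, only that the goal system is
# stable over time. That regularity alone is what the evidence rewards.

K = 1000          # hypothetical number of distinguishable alien actions
p_stick = 0.9     # assumed repeat probability under H_regular
n = 12            # we observe the same action (say, "cheesecake") n times

# Likelihood under H_random: each observation has probability 1/K.
lik_random = (1 / K) ** n

# Likelihood under H_regular, marginalizing over which action is preferred:
#   with prior probability 1/K, the preferred action is the one we keep seeing
#   (each observation then has probability p_stick);
#   otherwise every observation is an off-target action with
#   probability (1 - p_stick) / (K - 1).
lik_if_preferred = (1 / K) * p_stick ** n
lik_if_not = ((K - 1) / K) * ((1 - p_stick) / (K - 1)) ** n
lik_regular = lik_if_preferred + lik_if_not

# Posterior odds for "the alien has a stable (if unknown) preference",
# starting from even prior odds.
odds = lik_regular / lik_random
print(f"Odds favoring a stable preference: {odds:.3g}")
```

Even though the regular hypothesis never says what the target is, repeated identical observations favor it overwhelmingly; all of the evidential work is done by the assumption of stability over time, which is the low-entropy prior described above.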

If you truly expected no order in an alien mind’s goals—if you did not credit even the structured prior that lets you recognize order when you see it—then you would be unable to identify any optimization or any intelligence. Every possible configuration of matter would appear equally probable as “something the mind might design”, from desk dust to rainstorms. Just another hypothesis of maximum entropy.

This doesn’t mean that there’s some particular identifiable thing that all alien minds want. It doesn’t mean that a mind, “by definition”, doesn’t change its goals over time. Just that if there were an “agent” whose goals were pure snow on a television screen, its acts would look like snow as well.

Like thermodynamics, cognition is about flows of order. An ordered outcome needs negentropy to fuel it. Likewise, where we expect or recognize a thing, even one so lofty and abstract as “intelligence”, we must have ordered beliefs to fuel our anticipation. It’s all part of the great game, Follow-the-Negentropy.