Measuring Optimization Power

Previously in series: Aiming at the Target

Yesterday I spoke of how “When I think you’re a powerful intelligence, and I think I know something about your preferences, then I’ll predict that you’ll steer reality into regions that are higher in your preference ordering.”

You can quantify this, at least in theory, supposing you have (A) the agent or optimization process’s preference ordering, and (B) a measure of the space of outcomes (which, for discrete outcomes in a finite space of possibilities, could just consist of counting them). Then you can quantify how small a target is being hit, within how large a greater region.

Then we count the total number of states whose rank in the preference ordering is equal to or greater than that of the outcome achieved, or integrate over the measure of such states. Dividing this by the total size of the space gives you the relative smallness of the target—did you hit an outcome that was one in a million? One in a trillion?

Actually, most optimization processes produce “surprises” that are exponentially more improbable than this—you’d need to try far more than a trillion random reorderings of the letters in a book, to produce a play of quality equalling or exceeding Shakespeare. So we take the log base two of the reciprocal of the improbability, and that gives us optimization power in bits.
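Here is a minimal sketch of that calculation for the discrete, finite case. Everything in it (the function name, passing preferences in as a numeric rank) is illustrative scaffolding for the recipe above, not a canonical implementation:

```python
from math import log2

def optimization_power_bits(outcomes, preference_rank, achieved):
    """Bits of optimization implied by hitting `achieved` within `outcomes`.

    Toy version of the recipe above: count the outcomes ranked equal to or
    above the one achieved, divide by the size of the whole space to get the
    probability of doing that well by random selection, then take the log
    base two of the reciprocal.
    """
    total = len(outcomes)
    achieved_rank = preference_rank(achieved)
    at_least_as_good = sum(
        1 for o in outcomes if preference_rank(o) >= achieved_rank
    )
    p = at_least_as_good / total   # chance of doing this well at random
    return log2(1 / p)             # optimization power in bits

# Hitting the single best outcome out of 2**20 equally weighted possibilities
# is roughly a one-in-a-million event, i.e. 20 bits of optimization.
print(optimization_power_bits(range(2 ** 20),
                              preference_rank=lambda o: o,
                              achieved=2 ** 20 - 1))   # -> 20.0
```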

This figure (roughly, the improbability of an “equally preferred” outcome being produced by a random selection from the space, or measure on the space) forms the foundation of my Bayesian view of intelligence, or to be precise, optimization power. It has many subtleties:

(1) The wise will recognize that we are calculating the entropy of something. We could take the figure of the relative improbability of “equally good or better” outcomes, and call this the negentropy of the system relative to a preference ordering. Unlike thermodynamic entropy, the entropy of a system relative to a preference ordering can easily decrease (that is, the negentropy can increase, that is, things can get better over time relative to a preference ordering).
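To make the entropy analogy explicit, here is one way the bookkeeping can be written as a sketch; the symbols (a finite outcome space Ω with measure μ, a preference ordering ⪰, an achieved outcome x) just restate the setup from above:

```latex
% Sketch notation only: entropy of the achieved state relative to the
% preference ordering, and optimization power as the corresponding negentropy.
\[
S_{\succeq}(x) = \log_2 \mu\bigl(\{\, y \in \Omega : y \succeq x \,\}\bigr),
\qquad
\mathrm{OP}(x) = \log_2 \frac{\mu(\Omega)}{\mu(\{\, y \in \Omega : y \succeq x \,\})}
             = \log_2 \mu(\Omega) - S_{\succeq}(x).
\]
```

As the achieved outcome climbs the preference ordering, the region of states at least as good shrinks, so the relative entropy falls and the negentropy rises; that is the sense in which things can get better over time relative to a preference ordering.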

Suppose e.g. that a single switch will determine whether the world is saved or destroyed, and you don’t know whether the switch is set to 1 or 0. You can carry out an operation that coerces the switch to 1; in accordance with the second law of thermodynamics, this requires you to dump one bit of entropy somewhere, e.g. by radiating a single photon of waste heat into the void. But you don’t care about that photon—it’s not alive, it’s not sentient, it doesn’t hurt—whereas you care a very great deal about the switch.

For some odd reason, I had the above insight while watching X TV. (Those of you who’ve seen it know why this is funny.)

Taking physical entropy out of propositional variables that you care about—coercing them from unoptimized states into optimized states—and dumping the entropy into residual variables that you don’t care about, means that relative to your preference ordering, the total “entropy” of the universe goes down. This is pretty much what life is all about.
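A toy version of that bookkeeping in code, with made-up numbers standing in for the switch and the waste photon:

```python
from math import log2

def entropy_bits(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return sum(p * log2(1 / p) for p in probs if p > 0)

# Before: the switch (the variable we care about) is unknown, and no waste
# photon has been emitted yet.  These distributions are purely illustrative.
cared_before    = entropy_bits([0.5, 0.5])   # switch could be 0 or 1
residual_before = entropy_bits([1.0])        # residual variables in a known state

# After coercing the switch to 1: the cared-about variable is now certain,
# and at least one bit has been dumped into a residual variable we don't care
# about (say, which of two field modes carried off the waste photon).
cared_after    = entropy_bits([1.0])
residual_after = entropy_bits([0.5, 0.5])

print(cared_before + residual_before)    # 1.0 -- total entropy before
print(cared_after + residual_after)      # 1.0 -- the second law is respected
print(cared_before, "->", cared_after)   # 1.0 -> 0.0 in the variables we care about
```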

We care more about the variables we plan to alter, than we care about the waste heat emitted by our brains. If this were not the case—if our preferences didn’t neatly compartmentalize the universe into cared-for propositional variables and everything else—then the second law of thermodynamics would prohibit us from ever doing optimization. Just like there are no-free-lunch theorems showing that cognition is impossible in a maxentropy universe, optimization will prove futile if you have maxentropy preferences. Having maximally disordered preferences over an ordered universe is pretty much the same dilemma as the reverse.

(2) The quantity we’re measuring tells us how improbable this event is, in the absence of optimization, relative to some prior measure that describes the unoptimized probabilities. To look at it another way, the quantity is how surprised you would be by the event, conditional on the hypothesis that there were no optimization processes around. This plugs directly into Bayesian updating: it says that highly optimized events are strong evidence for optimization processes that produce them.
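To spell out that Bayesian step, here is a minimal sketch in odds form; the prior odds, the likelihood under “optimizer present”, and the 40-bit figure below are all made-up numbers:

```python
def posterior_odds_of_optimizer(prior_odds, p_event_given_optimizer, op_bits):
    """Posterior odds that an optimizer is present, after one observed event.

    Under the no-optimizer hypothesis, an event carrying `op_bits` of
    optimization (measured against the unoptimized prior measure) has
    probability 2**-op_bits.  Bayes' rule in odds form does the rest.
    """
    p_event_given_no_optimizer = 2.0 ** -op_bits
    likelihood_ratio = p_event_given_optimizer / p_event_given_no_optimizer
    return prior_odds * likelihood_ratio

# A 40-bit-optimized event, even against million-to-one prior odds that no
# optimizer is around, leaves the odds strongly favoring an optimizer.
print(posterior_odds_of_optimizer(prior_odds=1e-6,
                                  p_event_given_optimizer=0.5,
                                  op_bits=40))   # roughly 5.5e5 : 1 in favor
```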

Ah, but how do you know a mind’s preference ordering? Suppose you flip a coin 30 times and it comes up with some random-looking string—how do you know this wasn’t because a mind wanted it to produce that string?

This, in turn, is reminiscent of the Minimum Message Length formulation of Occam’s Razor: if you send me a message telling me what a mind wants and how powerful it is, then this should enable you to compress your description of future events and observations, so that the total message is shorter. Otherwise there is no predictive benefit to viewing a system as an optimization process. This criterion tells us when to take the intentional stance.
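A schematic version of that criterion; the three `codelength_*` arguments are hypothetical stand-ins for however you actually measure message lengths in bits:

```python
def take_intentional_stance(events,
                            codelength_of_events,
                            codelength_of_mind_hypothesis,
                            codelength_of_events_given_mind):
    """Schematic MML test: model the system as an optimizer only if saying
    'here is a mind, here are its preferences and its power', plus the events
    as seen through that lens, is a shorter total message than describing the
    raw events directly.
    """
    direct_bits   = codelength_of_events(events)
    via_mind_bits = (codelength_of_mind_hypothesis()
                     + codelength_of_events_given_mind(events))
    return via_mind_bits < direct_bits
```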

(3) Actually, you need to meet another criterion to take the intentional stance—there can’t be a better description that averts the need to talk about optimization. This is an epistemic criterion more than a physical one—a sufficiently powerful mind might have no need to take the intentional stance toward a human, because it could just model the regularity of our brains like moving parts in a machine.

(4) If you have a coin that always comes up heads, there’s no need to say “The coin always wants to come up heads” because you can just say “the coin always comes up heads”. Optimization will beat alternative mechanical explanations when our ability to perturb a system defeats our ability to predict its interim steps in detail, but not our ability to predict a narrow final outcome. (Again, note that this is an epistemic criterion.)

(5) Suppose you believe a mind exists, but you don’t know its preferences? Then you use some of your evidence to infer the mind’s preference ordering, and then use the inferred preferences to infer the mind’s power, then use those two beliefs to testably predict future outcomes. The total gain in predictive accuracy should exceed the complexity-cost of supposing that “there’s a mind of unknown preferences around”, the initial hypothesis.

Similarly, if you’re not sure whether there’s an optimizer around, some of your evidence-fuel burns to support the hypothesis that there’s an optimizer around, some of your evidence is expended to infer its target, and some of your evidence is expended to infer its power. The rest of the evidence should be well explained, or better yet predicted in advance, by this inferred data: this is your revenue on the transaction, which should exceed the costs just incurred, making an epistemic profit.

(6) If you presume that you know (from a superior epistemic vantage point) the probabilistic consequences of an action or plan, or if you measure the consequences repeatedly, and if you know or infer a utility function rather than just a preference ordering, then you might be able to talk about the degree of optimization of an action or plan rather than just the negentropy of a final outcome. We talk about the degree to which a plan has “improbably” high expected utility, relative to a measure over the space of all possible plans.
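A Monte Carlo sketch of that idea; `sample_random_plan` and `expected_utility` are hypothetical stand-ins for a sampler over the plan space and the utility evaluation presumed available from the superior epistemic vantage point:

```python
import random
from math import log2

def plan_optimization_bits(plan, sample_random_plan, expected_utility,
                           num_samples=100_000, rng=random):
    """Roughly how improbably high this plan's expected utility is, relative
    to a base measure over the space of possible plans.

    Estimates the fraction of randomly sampled plans whose expected utility
    is at least as high as this plan's, then returns the log base two of the
    reciprocal.  Add-one smoothing keeps the estimate finite when the plan
    beats every sample.
    """
    target = expected_utility(plan)
    at_least_as_good = sum(
        expected_utility(sample_random_plan(rng)) >= target
        for _ in range(num_samples)
    )
    p = (at_least_as_good + 1) / (num_samples + 1)
    return log2(1 / p)
```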

(7) A similar presumption that we can measure the instrumental value of a device, relative to a terminal utility function, lets us talk about a Toyota Corolla as an “optimized” physical object, even though we attach little terminal value to it per se.

(8) If you’re a human yourself and you take the measure of a problem, then there may be “obvious” solutions that don’t count for much in your view, even though the solution might be very hard for a chimpanzee to find, or a snail. Roughly, because your own mind is efficient enough to calculate the solution without an apparent expenditure of internal effort, a solution that good will seem to have high probability, and so an equally good solution will not seem very improbable.

By presuming a base level of intelligence, we measure the improbability of a solution that “would take us some effort”, rather than the improbability of the same solution emerging from a random noise generator. This is one reason why many people say things like “There has been no progress in AI; machines still aren’t intelligent at all.” There are legitimate abilities that modern algorithms entirely lack, but mostly what they’re seeing is that AI is “dumber than a village idiot”—it doesn’t even do as well as the “obvious” solutions that get most of the human’s intuitive measure, let alone surprisingly better than that; it seems anti-intelligent, stupid.

To measure the impressiveness of a solution to a human, you’ve got to do a few things that are a bit more complicated than just measuring optimization power. For example, if a human sees an obvious computer program to compute many solutions, they will measure the total impressiveness of all the solutions as being no more than the impressiveness of writing the computer program—but from the internal perspective of the computer program, it might seem to be making a metaphorical effort on each additional occasion. From the perspective of Deep Blue’s programmers, Deep Blue is a one-time optimization cost; from Deep Blue’s perspective it has to optimize each chess game individually.

To measure human impressiveness you have to talk quite a bit about humans—how humans compact the search space, the meta-level on which humans approach a problem. People who try to take human impressiveness as their primitive measure will run into difficulties, because in fact the measure is not very primitive.

(9) For the vast majority of real-world problems we will not be able to calculate exact optimization powers, any more than we can do actual Bayesian updating over all hypotheses, or actual expected utility maximization in our planning. But, just like Bayesian updating or expected utility maximization, the notion of optimization power does give us a gold standard against which to measure—a simple mathematical idea of what we are trying to do whenever we essay more complicated and efficient algorithms.

(10) “Intelligence” is efficient cross-domain optimization.