Shane, this is an advanced topic; it would be covered under the topic of trying to compute the degree of optimization of the optimizer, and the topic of choosing a measure on the state space.
First, if you look at parts of the problem in a particular order according to your search process, that’s somewhat like having a measure that gives large chunks of mass to the first options you search. If you were looking for your keys, then, all else being equal, you would search first in the places where you thought you were most likely to find your keys (or the easiest places to check, probability divided by cost, but set that aside for the moment); so there’s something like a measure, in this case a probability measure, that corresponds to where you look first. Think of turning it the other way around, and saying that the points of largest measure correspond to the first places you search, whether because the solution is most likely to be there or because the cost is lowest. These are the solutions we call “obvious” or “straightforward”, as if they had high probability.
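To make the probability-divided-by-cost ordering concrete, here is a toy sketch (my own illustration, with made-up locations and numbers): rank candidate search locations by probability per unit cost, which gives the order in which you would check them, i.e., where the induced measure puts its largest mass.

```python
# Hypothetical key-search example: each location has an estimated
# probability of holding the keys and a cost (effort) to check it.
locations = {
    "coat pocket": {"p": 0.40, "cost": 1.0},
    "kitchen counter": {"p": 0.30, "cost": 2.0},
    "under the couch": {"p": 0.20, "cost": 5.0},
    "car": {"p": 0.10, "cost": 10.0},
}

# Search order: highest probability-per-unit-cost first.
order = sorted(
    locations,
    key=lambda k: locations[k]["p"] / locations[k]["cost"],
    reverse=True,
)
```

The first entries in `order` are the “obvious” places, the points of largest measure in the sense described above.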
Second, suppose you were smarter than you are now and a better programmer, transhumanly so. Then for you, creating a chess program like Deep Blue (or one of the modern more efficient programs) might be as easy as computing the Fibonacci sequence. But the chess program would still be just as powerful as Deep Blue. It would be just as powerful an optimizer. Only to you, it would seem like an “obvious solution” so you wouldn’t give it much credit, any more than you credit gradient descent on a problem with a global minimum—though that might seem much harder to Archimedes than to you; the Newton-Raphson method was a brilliant innovation, once upon a time.
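The Newton-Raphson point can be made vivid: a method that was once a brilliant innovation is now an “obvious” few lines to any programmer. A minimal sketch, using Newton’s iteration on f(x) = x² − a to approximate a square root:

```python
def newton_sqrt(a, x0=1.0, iters=20):
    """Approximate sqrt(a) by Newton-Raphson on f(x) = x^2 - a."""
    x = x0
    for _ in range(iters):
        # Newton step: x_{n+1} = x_n - f(x_n) / f'(x_n)
        x = x - (x * x - a) / (2 * x)
    return x
```

The method is just as powerful an optimizer as it was for its inventors; it only stopped seeming like work.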
If you see a way to solve an optimization problem using a very simple program, then it will seem to you like the difficulty of the problem is only the difficulty of writing that program. But it may be wiser to draw a distinction between the object level and the meta level. Kasparov exerted continuous power to win multiple chess games. The programmers of Deep Blue exerted a constant amount of effort to build it, and then they could win as many chess games as they liked by pressing a button. It is a mistake to compare the effort exerted by Kasparov to the effort exerted by the programmers; you should compare Kasparov to Deep Blue, and say that Deep Blue was a more powerful optimizer than Kasparov. The programmers you would only compare to natural selection, say, and maybe you should include in that the economy behind them that built the computing hardware.
But this just goes to show that what we consider difficulty isn’t always the same as object-level optimization power. Once the programmers built Deep Blue, it would have been just a press of the button for them to turn Deep Blue on, but when Deep Blue was running, it would still have been exerting optimization power. And you don’t find it difficult to regulate your body’s breathing and heartbeat and other properties, but you’ve got a whole medulla and any number of gene regulatory networks contributing to a continuous optimization of your body. So what we perceive as difficulty is not the same as optimization-power-in-the-world—that’s more a function of what humans consider obvious or effortless, versus what they have to think about and examine multiple options in order to do.
We could also describe the optimizer in less concrete and more probabilistic terms, so that if the environment is not certain, the optimizer has to obtain its end under multiple conditions. Indeed, if this is not the case, then we might as well model the system by thinking in terms of a single linear chain of cause and effect, which would not arrive at the same destination if perturbed anywhere along its way—so then there is no point in describing the system as having a goal.
We could say that optimization isn’t really interesting until it has to cross multiple domains or unknown domains, the way we consider human intelligence and natural selection as more interesting optimizations than beavers building a dam. These may also be reasons why you feel that simple problems don’t reflect much difficulty, or that the kind of optimization performed isn’t commensurate with the work your intelligence perceives as “work”.
Even so, I would maintain the view of an optimization process as something that squeezes the future into a particular region, across a range of starting conditions, so that it’s simpler to understand the destination than the pathway. Even if the program that does this seems really straightforward to a human AI researcher, the program itself is still squeezing the future—it’s working even if you aren’t. Or maybe you want to substitute a different measure on the state space than the equiprobable one—but at that point you’re bringing your own intelligence into the problem. There are a lot of problems that look simple to humans, but it isn’t always easy to make an AI solve them.
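One way to quantify “squeezing the future” under the equiprobable measure: score an optimizer by −log₂ of the fraction of outcomes at least as good as the one it achieved. This is my own hedged illustration of the idea, not a formula stated above:

```python
import math

def optimization_bits(outcomes, achieved, utility):
    """Bits of optimization: -log2 of the fraction of outcomes scoring
    at least as well as the achieved outcome, under an equiprobable measure."""
    at_least_as_good = sum(1 for o in outcomes if utility(o) >= utility(achieved))
    return -math.log2(at_least_as_good / len(outcomes))

# Example: 1024 equally likely outcomes scored by their index; hitting the
# single best one squeezes the future into a 1/1024 region, i.e. 10 bits.
outcomes = list(range(1024))
bits = optimization_bits(outcomes, achieved=1023, utility=lambda o: o)
```

Substituting a different measure than the equiprobable one changes the denominator, and that choice is exactly where your own intelligence enters the problem.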