A nuclear bomb steers a lot of far-away objects into a high-entropy configuration, and does so very robustly, but that perhaps is not a “small part of the state space”.
This example reminds me of something I have been thinking about, namely that it seems like optimization can only occur where the optimizer produces, or is granted, enough “energy” to control the level below. Here the model works quite literally, since a nuclear bomb floods an area with energy, but I think it generalizes to e.g. markets with Dutch disease.
Flooding the lower level with “energy” is presumably not the only way this problem can occur; a lack of incentives/credit assignment at the upper level produces the same result, simply because without incentives the upper level never allocates “energy” to the area.
Yeah I think you’re right. I have the sense that the pure algorithmic account of optimization—that optimization is about algorithms that search over plans, using models derived from evidence to evaluate each plan’s merit—doesn’t quite capture what an optimizer really is in the physical world.
The thing is that I can implement a very general-purpose modelling + plan-search algorithm on my computer (for example, a Monte Carlo approximation of AIXI), hook it up to real sensors and actuators, and it will not do anything interesting at all. It’s too slow and unreflective to really work.
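To make that concrete, here is a minimal sketch of what “model + plan-search hooked up to sensors and actuators” looks like. This is a toy random-rollout planner in Python, not the actual MC-AIXI-CTW algorithm, and the model/sensor/actuator interfaces are hypothetical stand-ins:

```python
import random

def plan(model, observation, actions, horizon=10, rollouts=1000):
    """Score each first action by random rollouts through the model,
    then return the first action with the highest total score."""
    scores = {a: 0.0 for a in actions}
    for _ in range(rollouts):
        first = random.choice(actions)
        obs, action, total_reward = observation, first, 0.0
        for _ in range(horizon):
            obs, reward = model.predict(obs, action)  # assumed model interface
            total_reward += reward
            action = random.choice(actions)
        scores[first] += total_reward
    return max(scores, key=scores.get)

def run(model, sensor, actuator, actions, steps=100):
    """Hook the planner up to (hypothetical) real sensors and actuators."""
    for _ in range(steps):
        obs = sensor.read()            # e.g. a camera frame
        action = plan(model, obs, actions)
        actuator.act(action)           # e.g. a motor command
        model.update(obs, action)      # refine the model from experience
```

Every control step costs rollouts × horizon model evaluations, and the rollouts are only as informative as the model itself, which is roughly where the “too slow and unreflective” complaint bites.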
Therefore, running a consequentialist computation is definitely not a sufficient condition for an object to exert remote control as per John’s conjecture, but perhaps it is a necessary one—that’s what the OP is asking for a proof or disproof of.