Ok. Let me try to draw out why optimized stuff is inherently dangerous. This might be a bit meandering.
I think it’s because humans live in an only mildly optimized world. There’s this huge, high dimensional space of “the way the world can be” with a bunch of parameters, including the force of gravity, the percentage of oxygen in the air, the number of rabbits, the amount of sunlight that reaches the surface of the earth, the virulence of various viruses, etc. Human life is fragile; it depends on the world remaining within a relatively narrow “goldilocks” band for a huge number of those parameters.
Optimizing hard on anything, unless it is specifically for maintaining those goldilocks conditions, implies extremizing. Even if the optimization is not itself for an extreme value (e.g. one could be trying to maintain the oxygen percentage in the air at exactly 21.45600 percent), hitting a value that precisely means doing something substantially different from what the world would otherwise be doing. Hitting a value that precisely means you have to extremize on some parameter. To get a highly optimized value you have to steer reality into a corner case that is far outside the bounds of the current distribution of outcomes on planet earth.
Indeed, if it isn’t far outside the current distribution of outcomes on planet earth, that suggests there’s a lot of room left for further optimization. That’s because the world is not already optimized on that given parameter, and because the world is so high dimensional, it would be staggeringly, exponentially unlikely for the precisely optimized outcome to fall within the bounds of the current distribution of outcomes. By default, you should expect perfect optimization on any given parameter to land on what is effectively a random draw from the state space of all possible ways the earth can be. So if the world looks pretty normal, you haven’t optimized very hard for anything.
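To make the “exponentially unlikely” bit concrete, here’s a minimal toy sketch (the numbers and the uniform-draw model are my own illustration, not anything rigorous): treat the state of the world as a point in [0, 1]^d, one coordinate per parameter, and suppose the goldilocks band covers 20% of the range on each axis. A hard-optimized outcome, modeled as a random draw from the whole space, stays inside the normal band on every axis with probability 0.2^d, which collapses exponentially as d grows.

```python
import random

# Toy model (illustrative only): the world is a point in [0, 1]^d,
# and the "normal" goldilocks band covers 20% of each axis.

def prob_within_normal_bounds(dims: int, band_fraction: float = 0.2) -> float:
    """Probability a uniform random point lands in the band on every axis."""
    return band_fraction ** dims

def monte_carlo_check(dims: int, band_fraction: float = 0.2, trials: int = 100_000) -> float:
    """Empirical estimate of the same probability, feasible for small dims."""
    lo, hi = 0.5 - band_fraction / 2, 0.5 + band_fraction / 2
    hits = sum(
        all(lo <= random.random() <= hi for _ in range(dims))
        for _ in range(trials)
    )
    return hits / trials

for d in (1, 5, 10, 50):
    print(f"dims={d:>3}  analytic={prob_within_normal_bounds(d):.3e}")
# dims=  1  analytic=2.000e-01
# dims=  5  analytic=3.200e-04
# dims= 10  analytic=1.024e-07
# dims= 50  analytic=1.126e-35

print(f"monte carlo, dims=5: {monte_carlo_check(5):.1e}")  # ~3e-04
```

Even with only 50 parameters and a generous 20% band on each, a “random corner of state space” outcome is astronomically unlikely to look normal.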
That sounds right to me. A key addendum might be that extremizing one value will often extremize one or more other related values, even ones that normally stand in only a second-order relation to it. E.g. a baseball with extremized speed also extremizes the quantity of local radiation. So extremes often don’t stay localized to their domain.
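To gesture at how fast the “speed” parameter drags other parameters into extreme territory, here’s a rough back-of-envelope (standard textbook constants, nothing worked out in the thread itself): the kinetic energy alone of a regulation 145 g baseball at relativistic speed is in the megaton range, before even counting the plasma and radiation produced when it hits the atmosphere.

```python
# Back-of-envelope: kinetic energy of a baseball "optimized" for speed.
C = 299_792_458.0          # speed of light, m/s
MASS_KG = 0.145            # regulation baseball
MEGATON_TNT_J = 4.184e15   # joules per megaton of TNT

def relativistic_kinetic_energy(mass_kg: float, speed_fraction_of_c: float) -> float:
    """KE = (gamma - 1) * m * c^2 for a given fraction of light speed."""
    gamma = 1.0 / (1.0 - speed_fraction_of_c ** 2) ** 0.5
    return (gamma - 1.0) * mass_kg * C ** 2

for beta in (0.1, 0.5, 0.9):
    ke = relativistic_kinetic_energy(MASS_KG, beta)
    print(f"v = {beta:.1f}c  ->  KE ~ {ke:.2e} J  (~{ke / MEGATON_TNT_J:.2f} Mt TNT)")
# v = 0.1c  ->  KE ~ 6.56e+13 J  (~0.02 Mt TNT)
# v = 0.5c  ->  KE ~ 2.02e+15 J  (~0.48 Mt TNT)
# v = 0.9c  ->  KE ~ 1.69e+16 J  (~4.03 Mt TNT)
```

Push one parameter far enough and values that were only loosely coupled to it (energy release, temperature, radiation) get pulled to extremes along with it.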