I think your disagreement can be made clear with more formalism. First, the point for your opponents:
When the animals are in a cold place, they are selected for a long fur coat, and also for IGF (and other things as well). To some extent, these are just different ways of describing the same process. If they then move to a warmer place, they are selected for shorter fur instead, and they are still selected for IGF. And there is a more concrete correspondence here too: they have also been selected for "IF cold, long fur; ELSE short fur" the entire time. Notice especially that there are animals actually implementing this environment-dependent property: it can be evolved just fine, in the same way as the simple properties. In fact, you could "unroll" the concept of IGF into a humongous environment-dependent strategy, which would then always be selected for, because all the environment-dependence is already baked in.
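To make the conditional point concrete, here is a toy sketch (the strategies and the fitness rule are invented for illustration, not a model of any real selection process): the conditional strategy agrees with whichever simple trait the current environment rewards, so selection can never score it below either of them.

```python
# Toy sketch: a conditional strategy matches the locally rewarded simple
# trait in every environment. All names and values here are hypothetical.

def long_fur(env):
    return "long"

def short_fur(env):
    return "short"

def conditional(env):
    # "IF cold, long fur; ELSE short fur"
    return "long" if env == "cold" else "short"

def fitness(strategy, env):
    # the environment rewards the locally appropriate fur length
    ideal = "long" if env == "cold" else "short"
    return 1.0 if strategy(env) == ideal else 0.0

for env in ("cold", "warm"):
    print(env, fitness(long_fur, env), fitness(short_fur, env), fitness(conditional, env))
# → cold 1.0 0.0 1.0
# → warm 0.0 1.0 1.0
```

Selection in the cold cannot distinguish `long_fur` from `conditional`; only the move to a warm environment separates them, which is the sense in which the conditional property was "selected for the entire time".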
Now, on the other hand, if you train an AI first on one thing and then on another, wouldn't we expect it to get worse at the first again? Indeed, we would also expect a species living in the cold for a very long time to lose its adaptations to heat. The reason in both cases is, broadly speaking, limits and penalties on complexity. I'm not sure very many people would have bought the argument in the previous paragraph; we all know unused genetic code decays over time. But the behavioral/cognitive version, with an agent intentionally maximizing IGF, makes it easy to ignore these problems, because we're not used to remembering the physical correlates of thinking. Of course, a dragonfly couldn't explicitly maximize IGF: its brain is too small to even represent what that is, and developing such a brain has demands for space and energy incompatible with the general dragonfly life strategy. The costs of cognition are also part of the demands of fitness, and the dragonfly is more fit the way it is. Similarly, I think a human explicitly maximizing IGF would have done worse for most of our evolution[1], because the odds of getting something wrong are just too high at our current expenditure on cognition; better to hardcode some right answers.
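The decay claim can be shown in a minimal simulation (population size, mutation rate, and the bit-string genome are all invented for illustration): selection only "sees" the cold-adaptation half of the genome, so mutation freely erodes the heat-adaptation half.

```python
# Toy simulation: in a cold environment, selection maintains cold-adaptation
# bits but is blind to heat-adaptation bits, which drift toward randomness.
# All parameters here are arbitrary illustrative choices.
import random

random.seed(0)
GENES = 20                  # bits per adaptation
POP, GENS, MUT = 100, 300, 0.01

def cold_fitness(g):
    # in the cold, only the first half of the genome contributes to fitness
    return sum(g[:GENES])

pop = [[1] * (2 * GENES) for _ in range(POP)]   # start adapted to both climates

for _ in range(GENS):
    for g in pop:                               # mutation hits every bit equally
        for i in range(len(g)):
            if random.random() < MUT:
                g[i] ^= 1
    pop.sort(key=cold_fitness, reverse=True)
    survivors = pop[:POP // 2]                  # truncation selection on cold fitness only
    pop = [list(g) for g in survivors for _ in range(2)]

cold = sum(cold_fitness(g) for g in pop) / (POP * GENES)
heat = sum(sum(g[GENES:]) for g in pop) / (POP * GENES)
print(f"cold adaptation retained: {cold:.2f}, heat adaptation retained: {heat:.2f}")
```

The cold-adaptation bits stay near fixation under selection, while the unused heat-adaptation bits decay toward a coin flip per bit; the same asymmetry is what a complexity penalty produces during retraining.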
I don't share your optimistic conclusion, however. The part about selecting for multiple things simultaneously is true: you are always selecting for everything that's locally extensionally equivalent to the intended selection criterion. There is no move you could have made in evolution's place to actually select for IGF instead of [various particular things]; that already is what happens when you select for IGF, because it's the complexity, rather than a different intent, that led to the different result[2]. Similarly, reinforcement learning for human values will result in whatever is the simplest[3] way to match human values on the training data.
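The "locally extensionally equivalent" point can be sketched directly (the training set, the intended criterion, and the proxy are all hypothetical): two policies agree on every situation the selection process actually scores, so no signal computed there can separate them.

```python
# Two candidate policies agree on every scored input, so any selection
# signal computed on the training data rates them identically; only
# off-distribution inputs reveal the difference. All values invented.

train = [0, 1, 2, 3]      # situations encountered during selection/training
deploy = [5, 6, 7]        # novel situations after a distribution shift

def intended(x):
    # the criterion we meant to select for
    return x % 4

def proxy(x):
    # a simpler rule that happens to match `intended` on `train`
    return x

def training_score(f):
    # selection only ever observes behaviour on the training distribution
    return sum(f(x) == intended(x) for x in train)

print(training_score(intended), training_score(proxy))   # → 4 4: indistinguishable
print([intended(x) == proxy(x) for x in deploy])         # → [False, False, False]
```

A simplicity penalty then breaks the tie toward `proxy`, which is the sense in which selecting "for" the intended criterion yields whatever is simplest among everything that matches it on the training data.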
I don't think you've tried to spell out what that different move might look like for evolution, but your argument strongly implies that such moves exist, both for it and for the AI situation.
[1] and even today, still might if sperm donations et al. weren't possible
[3] in the sense of that architecture