I may need to think more about the relative advantages and disadvantages of each framing, but I don’t think either is outright wrong.
I agree it’s not wrong. I’m claiming it’s not a useful framing. If we must use this framing, I think humans and evolution are not remotely comparable on how good they are at long-term optimization, and I can’t understand why you think they are. (Humans may not be good at long-term optimization on some absolute scale, but they’re a hell of a lot better than evolution.)
I think in my example you could make a similar argument: looking at outcomes, you could say “Rohin is always optimizing for learning abstract algebra, and he has now become very good at abstract algebra.” It’s not wrong, it’s just not useful for predicting my future behavior, and doesn’t seem to carve reality at its joints.
(Tbc, I think this example is overstating the case, “evolution is always optimizing for fitness” is definitely more reasonable and more predictive than “Rohin is always optimizing for learning abstract algebra”.)
I really do think that the best thing is to just strip away agency, and talk about selection:
the argument is that evolution was not selecting for proto-culture / intelligence, whereas humans will select for proto-culture / intelligence
Re: usefulness:
Yes, I meant useful for reproductive fitness.
Suppose a specific monkey has some mutation and gets a little bit of proto-culture. Are you claiming that this will increase the number of children that monkey has?