Although I enjoyed thinking about this post, I don’t currently trust the reasoning in it, and decided not to update off it, for reasons I summarize as:
You are trying to compare two different levels of abstraction and the “natural” models that come out of each. I don’t think you made a good argument for why the more detail-oriented model would generalize better than the more abstract one. Your argumentation seems to boil down to an implicit notion that seeing more detail is better, which is true as far as information goes, but does not mean that the concepts expressing those details form a better model (overfitting).
Your reasoning in certain places lacks quantification where the argument rests on it.
More detailed comments, in order of appearance in the post:
> Observing this selection process, we can calculate the IGF of traits currently under selection, as a measure of how strongly those are being selected. But evolution is not optimizing for this measure; evolution is optimizing for the traits that have currently been chosen for optimization.
If IGF is how many new copies of a gene pop up in the next generation, can’t I say both that increasing IGF is a good general-level description of what’s going on, and at the same time look at the details? Why “but”? Maybe I’m nitpicking words, though. Also, the second sentence confuses me, although after reading the post I think I understand what you mean.
> Rather, they cautioned against thinking of evolution as an active agent that “does” anything in the first place.
I expect this sentence in the textbook is meant as advice against anthropomorphizing: putting oneself into the shoes of evolution and using one’s own instincts and judgement to decide what it would do. I think it is possible to analyze evolution as an agent, if one is careful to employ abstract reasoning, without unwittingly ascribing “goodness” or “justice” or the like to the agent.
> If we were modeling evolution as a mathematical function, we could say that it was first selecting for light coloration in moths, then changed to select for dark, then changed to select for light again.
To me this looks poor if considered as a model. The previous paragraph shows that you understand the process as the environment changing which genes get selected, which still means that what’s going on is increasing IGF; you would predict other similar scenarios by reasoning that such-and-such environmental factor makes such-and-such genes the ones increasing their presence. Looking at which genes were selected at which point, and precisely why, doesn’t automatically give you a better model than thinking in terms of IGF plus external knowledge about the laws of reality.
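To illustrate (my own toy sketch, not from the post; all fitness numbers are made-up assumptions): the moth flip-flop falls out of a single rule, “frequencies move toward higher relative fitness in the current environment,” with the environment only flipping the parameters, not the model.

```python
# Toy replicator-dynamics sketch of the peppered-moth story (my own
# illustration; the fitness values are invented for the example).

def step(p_dark: float, env: str) -> float:
    """One generation of selection on the dark-morph frequency."""
    # Assumption: dark moths are fitter on soot-darkened trees,
    # light moths on clean ones.
    w_dark, w_light = (1.2, 0.8) if env == "sooty" else (0.8, 1.2)
    mean_w = p_dark * w_dark + (1 - p_dark) * w_light
    return p_dark * w_dark / mean_w  # relative fitness decides

p = 0.05                 # dark morph starts rare
for _ in range(25):      # industrial pollution darkens the trees
    p = step(p, "sooty")
assert p > 0.9           # dark morph sweeps

for _ in range(30):      # clean-air laws reverse the environment
    p = step(p, "clean")
assert p < 0.1           # and the light morph sweeps back
```

The same update rule runs throughout; only the environmental parameters change, which is the sense in which “IGF + external knowledge about the environment” already predicts the reversal.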
> This leads to the trees becoming more common than the bushes. But since trees need to spend much more energy on producing and maintaining their trunk, they don’t have as much energy to spend on growing fruit. When trees were rare and mostly stealing energy from the bushes, this wasn’t as much of a problem; but once the whole population consists of trees, they can end up shading each other. At this point, they end up producing much less fruit from which new trees could grow, so have fewer offspring and thus a lower mean fitness.
This story is still compatible with the description that, at each point, evolution is following IGF locally, though not globally. I think this checks out with “reward is not the utility function” and such, and also with “selecting for IGF does not produce IGF-maximizing brains”. Though all this also makes me suspect that I could be misunderstanding too many things at once.
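A companion sketch of the tree/bush story (again my own toy model with invented numbers): each generation, selection follows relative fitness, yet once trees reach fixation, the population’s mean absolute fitness ends up below the all-bush starting point — local IGF-following, globally worse.

```python
# Toy sketch (invented numbers): locally following relative fitness
# can lower mean absolute fitness. Trees beat bushes in every mixed
# population, but a forest of trees shades itself.

def w_tree(p):                  # p = frequency of trees
    return 3.0 - 2.0 * p        # great when rare, mediocre when common

def w_bush(p):
    return 2.0 - 1.5 * p        # bushes suffer even more under tree shade

p = 0.01                        # trees start rare
for _ in range(40):
    mean_w = p * w_tree(p) + (1 - p) * w_bush(p)
    p = p * w_tree(p) / mean_w  # replicator step: relative fitness decides

assert p > 0.99                 # trees sweep to (near) fixation...
final_mean = p * w_tree(p) + (1 - p) * w_bush(p)
assert final_mean < 2.0         # ...below the all-bush mean fitness of 2.0
```

At every intermediate frequency the trees are the locally fitter type, so no step of the process contradicts “selection increases relative fitness” — which is exactly the local/global distinction in the paragraph above.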
> Effective contraception is a relatively recent innovation. Even hunter-gatherers have access to effective “contraception” in the form of infanticide, which is commonly practiced among some modern hunter-gatherer societies.
The first “effective” here is doing much more work than the second. See gwern’s comment.
> Particularly sensitive readers may want to skip the following paragraphs from The Anthropology of Childhood:
I expect these examples to be cherry-picked. I do not expect ancient societies to have intentionally killed on average 54⁄141 ≈ 1 out of 3 kids. My opinion here is volatile due to my ignorance of the matter, though.
> Also, even though the share of voluntarily childfree people is increasing, it’s still not the predominant choice. One 2022 study found that 22% of the people polled neither had nor wanted to have children—which is a significant amount, but still leaves 78% of people as ones who either have or want to have children. There’s still a strong drive to have children that’s separate from the drive to just have sex.
This is getting distracted by a subset of details when you could look at fertility rates, possibly at how they relate to wealth, and then at the trajectory of the world. My impression is that there is scientific consensus that fertility will probably go down even in countries that currently have high fertility, as they get richer.
> It’s a novel cultural development that we prioritize things other-than-having-children so much. Anthropology of Childhood spends significant time examining the various factors that affect the treatment of children in various cultures. It quite strongly argues that the value of children has always also been strongly contingent on various cultural and economic factors—meaning that it has always been just one of the things that people care about. (In fact, a desire to have lots of children may be more tied to agricultural and industrial societies, where the economic incentives for it are abnormally high.)
How much is “so much”? What amount would not count as “so much” for you?
> To me, the simplest story here looks something like “evolution selects humans for having various desires, from having sex to having children to creating art and lots of other things too; and all of these desires are then subject to complex learning and weighting processes that may emphasize some over others, depending on the culture and environment”.
I understand how this “looks like the story,” but not why it is the “simplest” one, in the context of taking it as a model, which I think is the subtext.
> But it doesn’t look to me like evolution selected us to desire one thing, and then we developed an inner optimizer that ended up doing something completely different. Rather, it looks like we were selected to desire many different things, with a very complicated function choosing which things in that set of doings each individual ends up emphasizing. Today’s culture might have shifted that function to weigh our desires in a different manner than before, but everything that we do is still being selected from within that set of basic desires, with the weighting function operating the same as it always has.
I agree with the first sentence: evolution selected on IGF, not on desiring IGF. “Selecting on IGF” is itself an abstraction of what’s going on, which in the case of humans involved some specific details we know or can guess about. In particular, a brain was coughed up that, compared to its clearly visible general abilities, does not end up optimizing IGF as its main goal. Whether to describe what happened as “selecting on IGF” is a question of how well that works as a concept for making predictive models.
So I think I mostly literally agree with this paragraph, but not with the adversative “but”: it’s not an argument against the thesis under debate.