I felt too stupid when it comes to biology to interact with the original superbabies post, but this speaks more to my language (data science), so I also just want to bring up a point I had about the original post that I’m still confused about, related to what you’ve mentioned here.
The idea I’ve heard about this is that intelligence has been under strong selective pressure for millions of years, which should a priori make us expect IQ to be a hard target for genetic enhancement. As Kevin Mitchell explains in “Innate,” most remaining genetic variants affecting intelligence are likely:
Slightly deleterious mutations in mutation-selection balance
Variants with fitness tradeoffs preventing fixation
Variants that function only in specific genetic backgrounds
Unlike traits that haven’t been heavily optimized (like resistance to modern diseases), the “low-hanging fruit” for cognitive enhancement has likely already been picked by natural selection. This means that the genetic landscape for intelligence might not be a simple upward slope waiting to be climbed, but a complex terrain where most interventions may disrupt finely tuned systems.
When we combine multiple supposedly beneficial variants, we risk creating novel interactions that disrupt the intricate balance of neural development that supports intelligence. The evolutionary “valleys” for cognitive traits may be deeper precisely because selection has already pushed us toward local optima.
This doesn’t make enhancement impossible, but suggests the challenge may be greater than statistical models indicate, and might require understanding developmental pathways at a deeper level than just identifying associated variants.
Also, if we look at things like horizontal gene transfer & shifting balance theory, we can see these as general ways evolution discovers hidden genetic variants during optimisation, and this just feels highly non-trivial to me? Like, competing against evolution for optimal information encoding just seems really difficult a priori? (Not a geneticist, so I might be completely wrong here!)
I’m very happy to be convinced that these arguments are wrong and I would love to hear why!
One data point that’s highly relevant to this conversation is that, at least in Europe, intelligence has undergone quite significant selection in just the last 9000 years. As measured in a modern environment, average IQ went from ~70 to ~100 over that time period (the y-axis of the graph, which is from David Reich’s paper, is standard deviations on a polygenic score for IQ).
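One way to sanity-check the units here: a shift measured in standard deviations of a polygenic score only translates into IQ points through the score’s correlation with measured IQ. A minimal sketch of the conversion (the correlation values are illustrative assumptions, not numbers from the paper):

```python
# Toy conversion: polygenic-score SDs -> expected IQ points.
# E[IQ shift] = (PGS shift in SDs) * r * 15, where r is the correlation
# between the score and measured IQ, and 15 is the IQ standard deviation.

def pgs_shift_to_iq(pgs_shift_sd: float, r: float) -> float:
    """Expected IQ-point change for a given mean shift in polygenic score."""
    IQ_SD = 15.0
    return pgs_shift_sd * r * IQ_SD

# Reading the score as a direct proxy for genotypic IQ (r = 1), a 2-SD rise
# in mean score matches the cited ~30-point change:
print(pgs_shift_to_iq(2.0, 1.0))  # 30.0

# With a present-day predictor of, say, r ~ 0.3, the same 2-SD shift in the
# observed score would predict a much smaller phenotypic change:
print(pgs_shift_to_iq(2.0, 0.3))  # 9.0
```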
I don’t have time to read the book “Innate”, so please let me know if there are compelling arguments I am missing, but based on what I know the “IQ-increasing variants have been exhausted” hypothesis seems pretty unlikely to be true.
There’s well over a thousand IQ points’ worth of variants in the human gene pool, which is not what you would expect to see if nature had exhaustively selected for all IQ-increasing variants.
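As a toy illustration of where a number like “a thousand IQ points’ worth” can come from under a purely additive model (all parameters below are made up for illustration; real effect-size distributions are skewed and only partly causal):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical additive model: many common variants, each of small effect.
n_variants = 10_000
effects = np.abs(rng.normal(0, 0.1, n_variants))  # IQ points per raising allele
freqs = rng.uniform(0.05, 0.95, n_variants)       # frequency of the raising allele

# An average person carries ~2*freq copies of each raising allele;
# the theoretical maximum genotype carries 2 copies everywhere.
mean_value = np.sum(2 * freqs * effects)
max_value = np.sum(2 * effects)

print(f"gap between the average genotype and 'all raising alleles': "
      f"~{max_value - mean_value:.0f} IQ points")  # on the order of 1000
```

The point isn’t the exact number, just that thousands of small-effect variants segregating at intermediate frequencies mechanically imply a huge gap between the average genotype and the best possible one, which is hard to square with selection having exhausted the supply.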
Unlike traits that haven’t been heavily optimized (like resistance to modern diseases)
Wait, resistance to modern diseases is actually the single most heavily selected-for thing in the last ten thousand years. There is very strong evidence of recent selection for immune system function in humans, particularly in the period following the domestication of animals.
Like, there has been so much selection for human immune function that you literally see higher read errors in genetic sequencing readouts in regions like the major histocompatibility complex (there’s that much diversity!)
but suggests the challenge may be greater than statistical models indicate, and might require understanding developmental pathways at a deeper level than just identifying associated variants.
If I have one takeaway from the last ten years of deep learning, it’s that you don’t have to have a mechanistic understanding of how your model is solving a problem to be able to improve performance. This notion that you need a deep mechanistic understanding of how genetic circuits operate or something is just not true.
What you actually need to do genetic engineering is a giant dataset and a means of editing.
Statistical methods like fine-mapping and adjusting for population-level linkage disequilibrium help, but they’re just making your gene editing more efficient by doing a better job of identifying causal variants. They don’t take it from “not working” to “working”.
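A small simulation of that point (purely illustrative numbers): a non-causal “tag” SNP in strong LD with a causal SNP shows nearly as strong an association, but “editing” it would do nothing; fine-mapping just changes which edits are worth making.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# A causal SNP, and a tag SNP that matches it ~90% of the time (strong LD).
causal = rng.binomial(2, 0.5, n)
tag = np.where(rng.random(n) < 0.9, causal, rng.binomial(2, 0.5, n))

# Only the causal SNP actually affects the trait.
trait = 1.0 * causal + rng.normal(0, 3, n)

for name, g in [("causal", causal), ("tag", tag)]:
    r = np.corrcoef(g, trait)[0, 1]
    print(f"{name}: association with trait r = {r:.3f}")

# Both SNPs "hit" in a GWAS, but changing the tag SNP leaves the trait
# untouched -- editing it wastes an edit, which is exactly the efficiency
# loss that fine-mapping reduces.
```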
Also, if we look at things like horizontal gene transfer & shifting balance theory, we can see these as general ways evolution discovers hidden genetic variants during optimisation, and this just feels highly non-trivial to me? Like, competing against evolution for optimal information encoding just seems really difficult a priori? (Not a geneticist, so I might be completely wrong here!)
Horizontal gene transfer doesn’t happen in humans. That’s mostly something bacteria do.
There IS weird stuff in humans like viral DNA getting incorporated into the genome (I’ve seen estimates that about 10% of the human genome is composed of this stuff!), but this isn’t particularly common and the viruses often accrue mutations over time that prevent them from activating or doing anything besides just acting like junk DNA.
Occasionally these viral genes become useful and get selected on (I think the most famous example of this is some ancient viral genes that play a role in placental development), but this is just a weird quirk of our history. It’s not like we’re prevented from figuring out the role of these genes in future outcomes just because they came from viruses.
IIUC human intelligence is not in evolutionary equilibrium; it’s been increasing pretty rapidly (by the standards of biological evolution) over the course of humanity’s development, right up to “recent” evolutionary history. So difficulty-of-improving-on-a-system-already-optimized-by-evolution isn’t that big of a barrier here, and we should expect to see plenty of beneficial variants which have not yet reached fixation just by virtue of evolution not having had enough time yet.
(Of course separate from that, there are also the usual loopholes to evolutionary optimality which you listed—e.g. mutation load or variants with tradeoffs in the ancestral environment. But on my current understanding those are a minority of the available gains from human genetic intelligence enhancement.)
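A quick sketch of why “not enough time yet” is quantitatively plausible: under the standard deterministic recursion, even an unambiguously beneficial allele takes a very long time to sweep when selection is weak (the selection coefficient below is a made-up illustrative value):

```python
# Deterministic allele-frequency change under selection coefficient s:
#   p' = p * (1 + s) / (1 + p * s)
# Count generations for a beneficial allele to go from 1% to 99%.

def generations_to_sweep(p: float = 0.01, s: float = 0.001, target: float = 0.99) -> int:
    gens = 0
    while p < target:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

g = generations_to_sweep()
print(f"{g} generations ~ {g * 25 / 1000:.0f}k years at 25 years/generation")
# ~9200 generations, i.e. ~230k years -- longer than anatomically modern
# humans have existed, so many weakly beneficial variants should still be
# mid-sweep rather than fixed.
```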
TL;DR:
While cultural intelligence has indeed evolved rapidly, the genetic architecture supporting it operates through complex stochastic development and co-evolutionary dynamics that simple statistical models miss. The most promising genetic enhancements likely target meta-parameters governing learning capabilities rather than direct IQ-associated variants.
Longer:
You make a good point about human intelligence potentially being out of evolutionary equilibrium. The rapid advancement of human capabilities certainly suggests beneficial genetic variants might still be working their way through the population.
I’d also suggest this creates an even more interesting picture when combined with developmental stochasticity—the inherent randomness in how neural systems form even with identical genetic inputs (see other comment response to Yair for more detail). This stochasticity means genetic variants don’t deterministically produce intelligence outcomes but rather influence probabilistic developmental processes.
What complicates the picture further is that intelligence emerges through co-evolution between our genes and our cultural tools. Following Heyes’ cognitive gadgets theory, genetic factors don’t directly produce intelligence but rather interact with cultural infrastructure to shape learning processes. This suggests the most valuable genetic variants might not directly enhance raw processing power but instead improve how effectively our brains interface with cultural tools—essentially helping our brains better leverage the extraordinary cultural inheritance (language among other things) we already possess.
Rather than simply accumulating variants statistically associated with IQ, effective enhancement might target meta-parameters governing learning capabilities—the mechanisms that allow our brains to adapt to and leverage our rapidly evolving cultural environment. This isn’t an argument against genetic enhancement, but for more sophisticated approaches that respect how intelligence actually emerges.
(Workshopped this with my different AI tools a bit and I now have a paper outline saved on this if you want more of the specific modelling frame lol)
IIUC human intelligence is not in evolutionary equilibrium; it’s been increasing pretty rapidly (by the standards of biological evolution) over the course of humanity’s development, right up to “recent” evolutionary history.
Why do you believe that? Do we have data that mutations that are associated with higher IQ are more prevalent today than 5,000 years ago?
The best and most recent (last year) evidence, based on comparing ancient and modern genomes, seems to suggest intelligence was selected very strongly during the agricultural revolution (a full SD) and has changed <0.2 SD since AD 0 [for the populations studied].
It seems that the evolutionary pressure for intelligence wasn’t that strong in the last few thousand years compared to selection on many other traits (health and sexually selected traits seem to dominate).
Edit: it would take some effort to dig up this study. Ping me if this is of interest to you.
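For scale, here’s a back-of-envelope version of that claim via the breeder’s equation (generation time and heritability are assumed round numbers, not values from the study):

```python
# Breeder's equation per generation: R = h2 * S.
# Suppose genotypic IQ rose ~1 SD between ~9000 and ~2000 years ago.

years = 9000 - 2000
gen_time = 25                # assumed years per generation
h2 = 0.4                     # assumed narrow-sense heritability
total_response_sd = 1.0

generations = years / gen_time
R = total_response_sd / generations   # response per generation, in SDs
S = R / h2                            # required selection differential, in SDs

print(f"{generations:.0f} generations; R = {R:.4f} SD/gen; S = {S:.3f} SD/gen")
# ~280 generations; R ~ 0.0036 SD/gen; S ~ 0.009 SD/gen -- a small but
# sustained pressure, consistent with it not dominating other selected traits.
```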
The evidence I have mentally cached is brain size. The evolutionary trajectory of brain size is relatively easy to measure just by looking at skulls from archaeological sites, and IIRC it has increased steadily through human evolutionary history and does not seem to be in evolutionary equilibrium.
(Also on priors, even before any evidence, we should strongly expect humans to not be in evolutionary equilibrium. As the saying goes, “humans are the stupidest thing which could take off, otherwise we would have taken off sooner”. I.e. since the timescale of our takeoff is much faster than evolution, the only way we could be at equilibrium is if a maximal-under-constraints intelligence level just happened to be exactly enough for humans to take off.)
There’s probably other kinds of evidence as well; this isn’t a topic I’ve studied much.
If humans are the stupidest thing which could take off, and human civilization arose the moment we became smart enough to build it, there is one set of observations which bothers me:
The Bering land bridge sank around 11,000 BCE, cutting off the Americas from Afroeurasia until the last thousand years.
Around 10,000 BCE, people in the Fertile Crescent started growing wheat, barley, and lentils.
Around 9,000-7,000 BCE, people in Central Mexico started growing corn, beans, and squash.
10-13 separate human groups developed farming on their own with no contact between them. The Sahel region is a clear example, cut off from Eurasia by the Sahara Desert.
Humans had lived in the Fertile Crescent for 40,000-50,000 years before farming started.
Here’s the key point: humans lived all over the world for tens of thousands of years doing basically the same hunter-gatherer thing. Then suddenly, within just a few thousand years starting around 12,000 years ago, many separate groups all invented farming.
I don’t find it plausible that this happened because humans everywhere suddenly evolved to be smarter at the same time across 10+ isolated populations. That’s not how advantageous genetic traits tend to emerge, and if it was the case here, there are some specific bits of genetic evidence I’d expect to see (and I don’t see them).
I like Bellwood’s hypothesis better: a global climate trigger made farming possible in multiple regions at roughly the same time. When the last ice age ended, the climate stabilized, creating reliable growing seasons that allowed early farming to succeed.
If farming is needed for civilization, and farming happened because of climate changes rather than humans reaching some intelligence threshold, then the “stupidest possible takeoff” hypothesis doesn’t look as plausible. Humans had the brains to invent farming long before they actually did it, and it seems unlikely that the evolutionary arms race that made us smarter stopped at exactly the point where we became smart enough to develop agriculture, with humans then stagnating while waiting for a better climate to take off.
I do agree that the end of the last glacial period was the obvious immediate trigger for agriculture. But the “humans are the stupidest thing which could take off” model still holds, because evolution largely operates on a slower timescale than the glacial cycle.
Specifics: the last glacial period ran from roughly 115k years ago to 12k years ago. Whereas, if you look at a timeline of human evolution, most of the evolution from apes to humans happens on a timescale of 100k–10M years. So it’s really only the very last little bit where an ice age was blocking takeoff. In particular, if human intelligence has been at evolutionary equilibrium for some time, then we should wonder why humanity didn’t take off 115k years ago, before the last ice age.
In particular, if human intelligence has been at evolutionary equilibrium for some time, then we should wonder why humanity didn’t take off 115k years ago, before the last ice age.
Yes, we should wonder that. Specifically, we note:
Humans and chimpanzees split about 7M years ago
The transition from archaic to anatomically modern humans was about 200k years ago
Humans didn’t substantially develop agriculture before the last ice age started 115k years ago (we’d expect to see archaeological evidence in the form of e.g. agricultural tools, which we don’t see, while we do see stuff like stone axes)
Multiple isolated human populations independently developed agriculture starting about 12k years ago
From this we can conclude that either:
Pre-ice-age humans were on the cusp of being able to develop agriculture, and an extra 100k years of gradual evolution was sufficient to bump them over the relevant threshold
There was some notable period between 115k and 12k years ago where the selective pressure on humans substantially strengthened or changed direction for some reason. Which might correspond to a very tight population bottleneck:
source: Robust and scalable inference of population history from hundreds of unphased whole-genomes
Note that “bigger brains” might also not have been the adaptation that enabled agriculture.
In the modern era, the fertility-IQ correlation seems unclear; in some contexts, higher fertility seems to be linked with lower IQ, in other contexts with higher IQ. I have no idea of what it was like in the hunter-gatherer era, but it doesn’t feel like an obviously impossible notion that very high IQs might have had a negative effect on fertility in that time as well.
E.g. because the geniuses tended to get bored with repeatedly doing routine tasks and there wasn’t enough specialization to offload those onto others, thus leading to the geniuses having lower status. Plus, having an IQ that’s sufficiently higher than that of others can make it hard to relate to them and get along socially, and back then there wouldn’t have been any high-IQ societies like a university or lesswrong.com to find like-minded peers at. Or some IQ-increasing variants might affect things other than intelligence in ways that are disadvantageous/fitness-decreasing in some contexts.
If you have a mutation that gives you +10 IQ that doesn’t make it hard for you to relate with your fellow tribe of hunter-gatherers.
There’s a lot more inbreeding in hunter-gatherer tribes, which results in mutations being shared throughout the tribe, than there is in modern Western society.
The key question is whether you get more IQ if you add IQ-increasing mutations from different tribes together. I don’t think that it being disadvantageous to have +30 IQ more than fellow tribe members would be a reason why additive IQ-increasing mutations should not exist.
There’s plenty of people with an IQ of 140, and plenty of people with an IQ of 60, and it seems most of this variation is genetic, which suggests that there are low-hanging fruit available somewhere (though possibly not in single point mutations).
Also, when a couple with low and high IQs respectively have children, the children tend to be normal, distributed across the full range of IQs. This suggests that there’s not some set of critical mutations that only work if the other mutations are present as well; the effect is more lots of smaller things that are additive.
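That prediction is easy to check in simulation under a purely additive model (variant counts and frequencies below are toy values):

```python
import numpy as np

rng = np.random.default_rng(2)
n_variants, n_children = 1000, 2000

# Two parents at opposite ends of the polygenic distribution: one carries
# mostly low alleles, one mostly high alleles (equal effects for simplicity).
low_parent = rng.binomial(2, 0.2, n_variants)
high_parent = rng.binomial(2, 0.8, n_variants)

def gamete(genotype):
    # Each variant transmits one of the parent's two alleles at random.
    return rng.binomial(1, genotype / 2)

children = np.array([(gamete(low_parent) + gamete(high_parent)).sum()
                     for _ in range(n_children)])

print(f"parents carry {low_parent.sum()} and {high_parent.sum()} raising alleles")
print(f"children: mean {children.mean():.0f}, sd {children.std():.1f}")
# Children cluster smoothly around the mid-parent value -- no cliff or
# bimodality, which is what a "critical set" of interdependent mutations
# would tend to produce.
```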
The book Innate actually goes into detail about a bunch of IQ studies and relates them to neuroscience, which is why I really liked reading it!
and it seems most of this variation is genetic
This to me seems like the crux here. In Innate, Mitchell states the belief, based on twin studies, that around 60% of the variation is genetic, 20% is developmental randomness (since brain development is essentially a stochastic process), and 20% is nurture.
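For reference, that kind of decomposition traditionally comes out of twin correlations via Falconer’s formula; a minimal version (the twin correlations below are round values in the typical range for adult IQ, not Mitchell’s exact figures):

```python
# Falconer's formula from twin studies:
#   h2 (genetic)            = 2 * (r_MZ - r_DZ)
#   c2 (shared environment) = 2 * r_DZ - r_MZ
#   e2 (everything else)    = 1 - r_MZ

r_mz, r_dz = 0.80, 0.50   # assumed identical / fraternal twin IQ correlations

h2 = 2 * (r_mz - r_dz)
c2 = 2 * r_dz - r_mz
e2 = 1 - r_mz

print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")  # 0.60, 0.20, 0.20
# Note the e2 bucket lumps developmental randomness together with
# non-shared environment and measurement error, which is why "20%
# developmental noise" is an interpretation layered on top of it.
```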
I do find this a difficult thing to think about, though, since intelligence can be seen as the speed of the larger highways and how well (differentially) coupled different cortical areas are. There are deep foundational reasons to believe that our cognition is concepts stacked on top of other concepts, as described in the Active Inference literature. A more accessible and practical way of seeing this is in the book How Emotions Are Made by Lisa Feldman Barrett.
Also, if you combine this with studies by Robert Sapolsky described in the book Why Zebras Don’t Get Ulcers, where traumatic events in childhood lead to lower IQ down the line, we can see how wrong beliefs that stick can worsen your stochastic process of development. This is because at timestep T-1 you had a belief or experience that shaped your learning to be way off, and at timestep T you’re using this to learn. Yes, the parameters are set genetically, yet from a mechanistic perspective this very much interfaces with your learning.
Twin studies also have a general bias in that they’re often conducted in societies affected by globalisation that have been connected for a long time. If you believe something like cultural evolution or cognitive gadgets theory, what is seen as direct genetic influence might actually be genetic influence conditional on the society you’re in sharing the same cognitive gadgets. (This is essentially one of the main critiques of twin studies.)
So there’s some degree to which (IQ | Cognitive Gadgets) could be decomposed genetically, but if you don’t condition on cultural tools the decomposition doesn’t make sense? There’s no fully general intelligence; there’s an intelligence that, given the right infrastructure, becomes general?