Human genetic engineering targeting IQ, as proposed by GeneSmith, is likely to lead to an arms race between competing individuals and groups (such as nation states).
- Arms races can destabilise existing power balances, such as nuclear MAD.
- Which traits people choose to genetically engineer in offspring may depend on what’s good for winning the race rather than what’s long-term optimal in any sense.
- If maintaining lead time against your opponent matters, there are incentives to bribe, persuade or even coerce people to bring genetically edited offspring to term.
- It may (or may not) be possible to engineer traits that are politically important, such as a superhuman ability to tell lies, to detect lies, to persuade others, to read others’ true intentions, etc.
- It may (or may not) be possible to engineer cognitive enhancements adjacent to IQ such as working memory, executive function, curiosity, truth-seeking, ability to experience love or trust, etc.
- It may (or may not) be possible to engineer cognitive traits that influence which political values you find appealing. For instance: affective empathy, respect for authority, introversion versus extroversion, inclination towards people versus inclination towards things, etc.
I’m spitballing here; I haven’t yet studied the genomic literature on which of these edits we do and don’t currently know. But also, we might end up investing money (trillions of dollars?) to find edits we don’t know about today.
Has anyone written about this?
I know people such as Robin Hanson have written about arms races between digital minds. Automated R&D using AI is already likely to be used in an arms race manner.
I haven’t seen as much writing on arms races between genetically edited human brains though. Hence I’m asking.
If you convince your enemies that IQ is a myth, they won’t be concerned about your genetically engineered high IQ babies.
Superhumans who are actually better than you at making money will eventually be obvious. Yes, some lead time may be obtainable before everyone catches on, but I expect a few years at most.
Standard objection: Genetic engineering takes a long time to have any effect. A baby doesn’t develop into an adult overnight. So it will almost certainly not matter relative to the rapid pace of AI development.
I agree my point is less important if we get ASI by 2030, compared to if we don’t get ASI.
That being said, the arms race can develop over a timespan of years, not decades. Six-year-old superhumans will prompt people to create the next generation of superhumans, and within 10–15 years we will have children from multiple generations, with the younger generations carrying edits with stronger effect sizes. Once the effects are visible across these generations, people might go at max pace.
PSA
Popularising human genetic engineering will, by default, also popularise lots of neighbouring ideas, not just the idea itself. If you are drawing attention to this idea, it may be useful to be aware of this.
An example of this that has already played out: popularising “ASI is dangerous” also popularises “ASI is powerful, hence we should build it”.
P.S. We also don’t know the end state of this race. +5 SD humans aren’t necessarily the peak; these humans may go on to research further edits.
This is unlikely to be a careful, controlled experiment; more likely it will be nation states moving at maximum pace to produce more babies so that they control more of the world when a new equilibrium is reached. And we don’t know when, if ever, this equilibrium will be reached.
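For a sense of scale on the +5 SD figure above: under a simple normal model of a trait’s distribution (an assumption for illustration only — distributions after heavy editing need not stay normal), +5 SD corresponds to roughly a one-in-a-few-million level in today’s population:

```python
import math

def tail_above(sd: float) -> float:
    """Upper-tail probability of a standard normal beyond `sd` standard deviations."""
    return 0.5 * math.erfc(sd / math.sqrt(2))

# Fraction of the population at or above +5 SD, assuming normality
p = tail_above(5.0)
print(f"P(trait > +5 SD) = {p:.2e}")    # on the order of 3e-07
print(f"about 1 in {1 / p:,.0f} people")
```

This is only a baseline for how rare such people are naturally; the whole point of the race described above is to make them common.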