I haven’t made up my mind on whether I endorse human genetic engineering, but I have technical doubts:
1. For simple embryonic selection, shouldn’t we consider the highest-IQ male embryo rather than the expected IQ of the embryos?
If I understand correctly, there is a bottleneck on eggs per egg donor, but not as tight a bottleneck on sperm cells per sperm donor. Assume there are 10,000 egg donors with high IQ, at 100 eggs per donor, mated with 1,000,000 sperm cells of one sperm donor with high IQ. Out of the 1,000,000 embryos, let’s say the highest-IQ embryo grows to childbearing age and is then mated with 100 eggs each of the 10,000 egg donors in the second generation. IQ gain per generation would then be measured as M(10^6)/√2 (i.e. M(10^6)/1.414), where M(n) is the expected maximum of n standard normal draws; this case is missing from your table.
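If M(n) here is read as the expected maximum of n iid standard normal draws (an assumption on my part, though it is the usual reading in embryo-selection tables), the figure can be sketched numerically with the standard extreme-value approximation. This is a rough order-statistics estimate, not a claim about realizable selection gains:

```python
import math

def phi(x):
    """Standard normal pdf."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal cdf, via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

def Phi_inv(p):
    """Inverse cdf by bisection (plenty accurate for this sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def expected_max(n):
    """Approximate E[max of n iid standard normals] via Gumbel asymptotics."""
    a = Phi_inv(1 - 1 / n)      # location of the extreme quantile
    b = 1 / (n * phi(a))        # scale of the extreme-value distribution
    return a + 0.57721566 * b   # Euler-Mascheroni correction to the location

m = expected_max(10**6)
print(m)                  # roughly 4.87 SD
print(m / math.sqrt(2))   # per-generation figure, roughly 3.44 SD
```

So on this idealized model, selecting the top embryo out of a million corresponds to roughly +3.4 SD per generation, which is consistent with the "+3 SD" figure assumed in point 2.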
2. How do you reason about theoretical IQ that falls outside the range where IQ tests have validity?
Assume my point 1 on simple selection is correct and +3 SD IQ gain per generation is achievable. Assume we start with +3 SD IQ donors. After two generations that’s already a +9 SD IQ adult male. How do you make sense of a number like +9 SD IQ?
3. Is age 18 for sperm and egg donation a legal limit or a biological one?
I’m asking because if it is not a biological limit, I can imagine a govt changing the law for various reasons, for example in order to win an arms race against another country.
When we calculate IQ gain per generation, is this IQ gain per 18 years or IQ gain per 12 years?
Yeah, I don’t know if it makes much sense, and haven’t thought too much about it. A few points:
I don’t know if I actually care too much. Or rather, I think it would be awesome if +9 SD IQ just makes sense somehow, and also we can enable parents to choose that, and also it flows into more generally being insightful. But I think just having tons of people sample from a distribution with a mean of +6 SD already makes the outlook for humanity significantly better, in my view. It’s not remotely the case that every such person will be a great scientist or philosopher; people will have different interests, we shouldn’t be strongly conscripting GE children, and there are likely several other traits quite important for contributing to defending against humanity’s greatest threats (e.g. curiosity, bravery, freethinking; attention, determination, drive; wisdom, reflectiveness).
Actually targeting +9 SDs on anything, especially IQ, is potentially quite dangerous and should either be done with great caution after further study, or not at all. See the bullet point “Traits outside the regime of adaptedness”.
But if I speculate:
Some genetic variants will be about “sand in the gears” of the brain. It doesn’t seem crazy to think that you can get a lot of performance by just removing an exceptionally large amount of this. But IDK how much there actually is; kman might have suggested that this isn’t actually much of the genetic variance in IQ.
Some genetic variants will be about “scaling up” (e.g. literally growing a bigger brain, or a more vascularized one, or one with a higher metabolic budget to spend on plasticity, or something like that, IDK). These seem like they plausibly could keep having meaningful effects past the human envelope, but on the other hand could easily hit limits discussed in “Traits outside...”.
Some genetic variants will be about, vaguely, budgeting resources between different neural algorithms. These could easily keep having effects outside the human envelope (e.g., let’s have AN EVEN BIGGER MATH BRAIN CHUNK EVEN THAN EINSTEIN, or what have you). On the other hand, you could very plausibly overshoot, and get some weird or dysfunctional result.
Cf. jimrandomh’s comment about hyperparameters and overshooting here: https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies?commentId=C7MvCZHbFmeLdxyAk
Got it.
On a technical level, I think more speculating is good before we run the experiment, given that these people if born may very well end up the most powerful people in history. Even small differences in the details could lead to very different futures for humanity.
On a non-technical level, it might be worth writing a post about your stance on the morality and politics of this. So we can separate that whole discussion from the technical discussion.
Even small differences in the details could lead to very different futures for humanity.
I don’t super agree with this. But also, I’d view that as somewhat of a failure. Part of why I want the technology to be widely available is that I think that decreases the path-dependence. Lots of diverse GE kids means a more universalist future, hopefully.
Yeah. I have several articles I want to write, though IDK which will become high-priority enough. Some thoughts on genomic liberty are here: https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies?commentId=ZeranH3yDBGWNxZ7h
Cool!
Have you read meditations on moloch?
it might be worth writing a post about your stance on the morality and politics of this.
My view on this is that even when individuals and countries are not under tight “adapt or die” competition constraints, such as during wartime or poverty, everyone faces incentive gradients. Free choices aren’t exactly free. For instance, I was “free” to not learn software development and pick a lower-paying job, but someone from the outside could still have predicted with high likelihood that I was going to learn software anyway.
I have read it (long ago).
I take the general point that making this technology partially removes a barrier: previously, human influence over children’s traits was limited, and afterward there is at least somewhat more influence. E.g. this could lead to:
Sacrificing wellbeing for competitiveness
Social pressure / “soft eugenics”
Competitive selection (where I mentioned the Meditations)
One point of comparison is the default. There is a human-evolution that is always happening. Do we like its results? Do we trust it?
Another thing to point out is that the barrier is only somewhat eroded. Except for whole genome synthesis, the amount of control that germline engineering offers is fairly small compared to the total genetic and phenotypic variation in humans. You and I have 5 or 10 million differing alleles between us; GV would have an effect that’s comparable to, say, thousands of alleles. (This doesn’t directly make sense for selection, but morally speaking.) In terms of phenotypes, most of the variation would still be in uncontrolled genomic differences and non-genetic differences. Current IQ PGSes explain <20% of the variance in IQ. Now, to some extent I can’t have it both ways: either the benefits of GE are enormous because we’re controlling traits somewhat, or we aren’t controlling traits much and the benefits can’t be that big. But think, for example, of shifting the mean of your kid’s expected IQ, without much shifting the variance. (For some disease traits you can drive the probability of the disease far down, which is a lot of phenotypic control; but that seems not so bad!)
I’m glad you’re thinking about it.
I would still encourage you to forecast what capabilities look like not just as of 2025, but after a trillion dollars of R&D enters this space. Mobilising a trillion dollars for a field of such importance is not difficult once successful clinical results are out. All your claims about mean and variance, or about whole genome synthesis being possible, may well no longer apply at that point.
For simple embryonic selection, shouldn’t we consider the highest-IQ male embryo rather than the expected IQ of the embryos?
There’s something really off about the frame of your question. I’m not exactly sure where you’re coming from. I’m not trying to direct anyone’s reproduction, I’m not trying to influence anything at a population level, and also I’m not really focused on anything about multiple generations.
Hmm
So I get that you want to do things with the consent of everyone involved, be it the sperm donor, egg donor, or the people who will actually raise the child. This doesn’t preclude thinking about population-level changes or thinking ahead to multi-generational consequences.
Even if people don’t explicitly aim for population changes, these might be the emergent effects. It may be individually rational for each person to seek out the highest-trait sperm donors they can find, even if they haven’t coordinated with each other to do it.
More importantly though, your noble intentions are going to matter only so much once this tech is in the public domain. People with various types of intentions may have access to it. If we have anything resembling markets + democracy + internet + a multipolar nuclear world, a maxim that makes some sense to me is “don’t release any tech unless you’re ready for your worst enemies to have access to it as well”. You should assume, for example, that Kim Jong Un also has access to this.
your noble intentions are going to matter only so much once this tech is in the public domain
This is somewhat true, yeah. But it’s only somewhat true. E.g.:
One can unilaterally make the technology cheaper and more effective. Generally this makes it more widely accessible, which makes it harder for enemies who would want to keep it for themselves to do so. E.g. if inequality were going to be a big resulting problem, you can fix some of it unilaterally in this way.
Some key aspects of the technology will still require large amounts of skill. I’m thinking in particular of polygenic scores. If KJU wants to make an obedience PGS that actually works for the Korean population, he’d have to find a team of geneticists and psychometricians willing to do so. To say it another way, there is a separate cat to be let out of the bag for each trait (roughly speaking) that you might want to select for/against.
I think skill can be stolen via cyberhacking + espionage, even assuming you are able to prevent them from just hiring ex-employees and ex-researchers. The meaningful question for me is how many months of lead time anyone can maintain before being copied by other nuclear-armed countries.
Unless you really find a better plan, my first guess is this is going to lead to an international arms race between multiple countries to develop the most intelligent and politically loyal embryos they possibly can, as fast as they possibly can. The race won’t stop until we hit massively diminishing returns on more investment into both R&D for new genes and raising more children with the genes we already know, and nobody knows how many SDs on which traits we get before this end state.
my first guess is this is going to lead to an international arms race between multiple countries to develop the most intelligent and politically loyal embryos they possibly can, as fast as they possibly can.
I wish there were some more grounded analysis of this sort of thing I could read somewhere. E.g., historical comparisons of other things that states have done with similar motivations. Or e.g. cases where some technology gets used for good and for evil, and then: is it net positive? I feel conversations about what states will do with germline engineering just hit a wall immediately, because who knows what will happen.
I think extreme fear of / antipathy towards eugenics is good in part because it constitutes political will to not have states do this sort of thing—controlling people’s reproduction, influencing populations. Accordingly, I advocate for genomic emancipation, which is directly opposed to state eugenics.
I will let you know when I write an article of this type!
In general though, US policy-making circles have a long history of applying just enough pressure on other countries that the frontier of R&D in every emerging field remains in the US. It’s not a coincidence that the frontiers of quantum computing, genomics, fusion energy, AI, and a hundred other technologies all lie in the US.
Sometimes this does lead to war; US military leaders have, afaik, started wars over who has nuclear weapons, who has chemical weapons, and who has oil. But often there are more subtle levers that can be pulled, such as export controls and backchannel collusion with the leading CEOs of that industry.
I don’t at the moment have a list of examples that doesn’t involve the US, but I know they exist and I agree this is worth writing more on.
From my lay perspective w.r.t. international politics, this seems like it would plausibly be good, to be clear. My frontpage says:
Germline engineering will require an international scientific, technological, and social effort, which we encourage and aim to help with. With that said, as the world’s liberal democratic superpower, the United States of America should lead the way on human germline genetic engineering, including enhancement. If America supports this technology while regulating its unsafe and unethical uses, we can show the world how to develop and apply it beneficially.
The US, or at least what the US is supposed to be and isn’t impossibly far from, is a place where you could have strong boundaries preventing the government from restricting genomic liberty, while also supporting the development of the tech.
@TsviBT I don’t know if you were the one who downvoted my comment, but yeah I don’t think you’ve engaged with the strongest version (steelman?) of my critique. Laws (including laws promoting genomic liberty) don’t carry the same weight during a cold war as they do during peacetime. Incentives shape culture, culture shapes laws.
And the incentives change significantly when a technology upsets the fundamental balance of power between the world’s superpowers.
Or maybe you’re arguing “don’t develop any technology” or “don’t develop any powerful technology” because “governments might misuse it”. That’s something you could reasonably argue, but I think you should just argue that in general if that’s what you’re saying, so the case is clearer.
I didn’t downvote any of your comments, and I don’t see any upthread comments with any downvotes!
Anyway, you could steelman your case if you like. It might help if you compared to other technologies, like “We should develop powerful thing X but not superficially similar powerful thing Y because X is much worse given that there are governments”, or something.
Okay!
I’m not universally arguing against all technology. I’m not even saying that an arms race means this tech is not worth pursuing, just be aware you might be starting an arms race.
Intelligence-enhancing technologies (like superintelligent AI, connectome-mapping for whole brain emulation, human genetic engineering for IQ) are worth studying in a separate bracket IMO because a very small differential in intelligence leads to a very large differential in power (offensive and defensive, scientific and business and political, basically every kind of power).
Yes it’s possible we end up in a world where the US govt is basically competing with its own shadow yet again. US startup builds some tech, it gets copied 6 months later by non-US startup, US startup feels pressure to move faster as a result and deploys next tech, the next tech too gets copied, etc etc.
I’m not saying this will definitely happen, but there’s a bunch of incentives pushing in this direction.