People who want genius superbabies: how worried are you about unintended side effects of genetic interventions on personality?
Even if we assume genetically modified babies will all be very healthy and smart on paper, genes that are correlated with intelligence might affect hard-to-measure but important traits. For example, they might alter aesthetic taste, emotional capacity, or moral/philosophical intuitions. From the subjective perspective of an unmodified human, these changes are likely to be “for the worse.”
If you pick your child’s genes to maximize their IQ (or any other easily-measurable metric), you might end up with the human equivalent of a benchmaxxed LLM with amazing test scores but terrible vibes.
I’d be hesitant to hand off the future to any successors which are super far off distribution from baseline humans. Once they exist, we obviously can’t just take them back. And in the case of superbabies, we’d have to wait decades to find out what they’re like once they’ve grown up.
It’s a concern. Several related issues are mentioned here: https://berkeleygenomics.org/articles/Potential_perils_of_germline_genomic_engineering.html E.g. search “personality” and “values”, and see:
Antagonistic pleiotropy with unmeasured traits. Some crucial traits, such as what is called Wisdom and what is called Kindness, might not be feasibly measurable with a PGS and therefore can’t be used as a component in a weighted mixture of PGSes used for genomic engineering. If there is antagonistic pleiotropy between those traits and traits selected for by GE, they’ll be decreased.
A related issue is that intelligence itself could affect personality:
Even if a trait is accurately measured by a PGS and successfully increased by GE, the trait may have unmapped consequences, and thus may be undesirable to the parents and/or to the child. For example, enhancing altruistic traits might set the child up to be exploited by unscrupulous people.
An example with intelligence is that very intelligent people might tend to be isolated, or might tend to be overconfident (because of not being corrected enough).
One practical consideration is that PGSes are sometimes constructed by taking related phenotypes and using those as proxies because they correlate. The big one for IQ is Educational Attainment, because EA is easier to measure than IQ (you just ask about years of schooling or whatever). If you do this in the most straightforward way, you’re really selecting for EA, which would probably select for several personality traits, some of them possibly undesirable.
I think in practice these effects will probably be pretty small and not very concerning, though we couldn’t know for sure without trying and seeing. A few lines of reasoning:
Correlations between IQ and known personality traits are either very small or pretty small. You could look at https://en.wikipedia.org/wiki/Intelligence_and_personality . The numbers there are usually less than .3 or even .2 in absolute value. If a correlation is .25, then 4 SDs of IQ translates to 1 SD on that trait (a priori). That means a roughly 1-in-30,000 (+4 SD) exceptionally smart kid is expected to be about 1 in 6 exceptional on that trait. You could notice that, but I think it would be mild. Of course, this could be different for unknown personality traits; but IIUC scientists do try to find general factors in tests, so it would have to be something that doesn’t show up there.
Most trait correlations in general seem to be quite small. See https://www.youtube.com/watch?v=n64rrRPtCa8&t=1620s . Of course this depends on which traits we’re talking about, and as we see above, some traits are correlated. But what this says to me is that even highly polygenic traits that are vaguely related (e.g. various health things; or intelligence vs. mental illness) can easily be mostly disjoint—in fact by default they usually are. In other words, if there’s a significant correlation between two traits, I would guess that it’s not so much “really due to pleiotropy”, but rather due to the traits actually overlapping somehow. I think that suggests you’d get roughly the same sort of distribution as you see empirically today; in other words, there wouldn’t be surprise genetic pleiotropy. (I’m not sure this argument makes sense; I haven’t thought about it much.)
There’s a huge amount of genetic variation in IQ to select from. See https://tsvibt.blogspot.com/2022/08/the-power-of-selection.html#7-the-limits-of-selection . This means that there’s actually a huge range of ways to add 50 IQ points by making genetic tweaks. Just to illustrate the point with fake numbers: suppose that IQ is the sum of 10,000 fair coin flips (some genetic, some environmental); a standard deviation is then 50. And suppose we know 1,000 of them. That’s already 1000 / 50 = 20 SDs of raw range! There are a lot of ways to pick 150 flips from those 1,000, and there are still a lot of ways even if you enforce substantial non-overlap between all pairs of subsets.
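The arithmetic in the last two points can be sanity-checked in a few lines of Python. To be clear, these are the toy numbers from above (an assumed r = .25, and the fake 10,000-coin-flip model), not any real genetic architecture:

```python
import math

# -- A correlation of .25: +4 SD on IQ implies +1 SD expected on the trait --
def tail(z):
    """P(Z > z) for a standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2))

r, iq_z = 0.25, 4.0
trait_z = r * iq_z              # expected shift on the correlated trait: +1 SD
print(1 / tail(iq_z))           # +4 SD is roughly 1 in 30,000
print(1 / tail(trait_z))        # +1 SD is roughly 1 in 6

# -- The coin-flip illustration of the selection budget --
n_flips, known = 10_000, 1_000
sd = math.sqrt(n_flips * 0.25)  # binomial SD = sqrt(n * 1/2 * 1/2) = 50
print(known / sd)               # 20 SDs of raw range among the known flips

# And there are astronomically many distinct 150-flip subsets to choose from:
print(math.comb(known, 150) > 10**150)  # True
```

The last line is the combinatorial version of “a huge range of ways to add 50 IQ points”: even demanding near-disjoint subsets barely dents a count that large.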
From the subjective perspective of an unmodified human, these changes are likely to be “for the worse.”
Glancing at the correlations given in the wiki page ( https://en.wikipedia.org/wiki/Intelligence_and_personality ) I don’t especially feel that way.
If you pick your child’s genes to maximize their IQ (or any other easily-measurable metric), you might end up with the human equivalent of a benchmaxxed LLM with amazing test scores but terrible vibes.
I’m not sure I follow. I mean I vaguely get it, but I don’t non-vaguely get it.
And in the case of superbabies, we’d have to wait decades to find out what they’re like once they’ve grown up.
I don’t think this is right. If we’re talking about selection (rather than editing), the child has a genome that is entirely natural, except that it’s selected according to your PGS to be exceptional on that PGS. This should be basically exactly the same as selecting someone who is exceptional on your PGS from the population of living people. So you could just look at the tails of your PGS in the population and see what they’re like. (This does become hard with traits that are rare / hard / expensive to measure, and it’s hard if you’re interested in far tails, like >3 SDs say.) (In general, tail studies seem underattended; see https://www.lesswrong.com/posts/i4CZ57JyqqpPryoxg/some-reprogenetics-related-projects-you-could-help-with , though also see https://pmc.ncbi.nlm.nih.gov/articles/PMC12176956/ which might be some version of this (for other traits).)
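The equivalence claim can be illustrated with a toy simulation. Everything here is made up (a PGS explaining half the trait’s SD, best-of-10 embryo selection): the point is just that, conditional on landing at the same PGS value, selected embryos and ordinary population members have the same trait distribution, so population tails are informative about selected children.

```python
import random
import statistics

random.seed(0)

def person():
    # Toy model (assumed numbers): trait = 0.5 * PGS + independent residual.
    pgs = random.gauss(0, 1)
    trait = 0.5 * pgs + random.gauss(0, 1)
    return pgs, trait

# A natural population.
pop = [person() for _ in range(300_000)]

# "Embryo selection": each family picks the best of 10 embryos by PGS.
kids = [max((person() for _ in range(10)), key=lambda p: p[0])
        for _ in range(30_000)]

def mean_trait_in_window(people, lo=1.5, hi=2.0):
    """Average trait among people whose PGS lands in [lo, hi)."""
    return statistics.fmean(t for p, t in people if lo <= p < hi)

# Conditional on PGS, the trait is distributed the same either way,
# so the two means agree up to sampling noise.
print(mean_trait_in_window(pop))
print(mean_trait_in_window(kids))
```

The same logic is why far tails are the hard case: at >3 SD on the PGS there are few natural population members to look at, even though the conditional distribution is unchanged.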
Once they exist, we obviously can’t just take them back.
Why… can’t we take them back? I don’t think you should kill them, but regression to the mean seems like it takes care of most of the effects in one generation.
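As a back-of-envelope for the regression claim: assuming the selected-for boost is purely additive genetic (a toy assumption) and each generation mates with an average partner, the expected boost halves per generation.

```python
# Toy numbers: a hypothetical heavily selected parent, partner at the mean.
parent_boost_sd = 6.0
partner_boost_sd = 0.0

# Under additive inheritance the child's expected deviation is the midparent
# value, so half the boost is gone in one generation...
child = (parent_boost_sd + partner_boost_sd) / 2

# ...and three quarters after two generations of average partners.
grandchild = (child + 0.0) / 2

print(child, grandchild)  # 3.0 1.5
```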
how worried are you about unintended side effects of genetic interventions on personality?
A reasonable amount. Like, much less than I am worried about AI systems being misaligned, but still some. In my ideal world humanity would ramp up something like +10 IQ points of selection per generation, and so would get to see a lot of evidence about how things play out here.
Well, now that I think about it, I’m not sure what scenario I should be imagining here.
Scenario 1: if genetic interventions became popular enough that the entire world were getting 10 IQ points smarter each generation, as you say, then it seems obvious to me that you’d be unable to take it back. Surely the first generation of superbabies would want future generations to be like them. If their parents say “actually, y’all were a mistake, let our grandchildren mean-regress and be normal please,” they’d simply refuse.
Scenario 2: more realistically IMO, we start with a generation of a few thousand superbabies, who are the children of rationalist-type people who really care about intelligence. Maybe these people grow up very smart but very weird, and they are unable to shape society to their weird preferences because there aren’t that many of them.
But wait, many people view these genetic interventions as our best hope to save the world… Do we expect that the superbabies are going to be smart enough to make the critical difference in solving AI alignment, but we don’t expect they’ll gain enough influence to significantly affect society’s future values? Seems unlikely to me.
For better or for worse, the second scenario is basically already playing out—you have people like Elon Musk and Mark Zuckerberg who got their power by being very smart, who now get to shape the world in their own weird ways. Powerful people are already optimized for being intelligent via selection effects; genetic optimization would just be another layer on top of that.
I expect the sloptimization of the children to happen more or less by default in the superbaby scenario, but less due to antagonistic pleiotropy and more due to explicit and intense selection by most parents against autism/bipolar/schizophrenia/etc.
This is purely anecdotal and experiences may differ (I am not trying to make a quantitative claim): most of the most brilliant and creative people I’ve ever met have a personal or family history of at least one of those illnesses. This kind of selection may leave the average child better off, but (I fear) at the cost of tail effects depriving humanity of a precious corner of mind space.
I’m not really worried given the expected effect size. e.g. let’s say by the time I’d be ready to use such tech, the best known interventions have an EV of +10 IQ points. Well the world already has a significant number of humans with 10 more IQ points than me (and a significant number in a range on both sides of that); obviously I haven’t done real studies on this but my vague impression looking around at those existing people is that the extra IQ points don’t on average trade off against other things I care about. (It’s possible that I’m wrong about existing humans, or that I’m right but that tech selecting for IQ points does involve a trade-off I wouldn’t accept if I knew about it.)
I haven’t thought much about how worried I’d be if we’re talking about much more extreme IQ gains. Certainly in the limit, if you told me my kid would have +100000 IQ points, I mean my first response is that you’re lying, but yeah if I believed it I’d be worried for similar reasons that one worries about ASI.
Also I realize I was talking about (at expected effect size) whether there’s a trade-off at all to gaining more IQ, but to be clear even if there is a trade-off (or uncertainty about the trade-off) it is still probably worth it up to some point—I certainly don’t think that we should only augment intelligence if we can prove it is literally costless.