I mean hell, figuring out personality editing would probably just make things backfire. People would choose to make their kids more ruthless, not less.
Not at all obvious to me this is true. Do you mean to say a lot of people would, or just some small fraction, and you think a small fraction is enough to worry?
I should have clarified: I meant that a small fraction would, and that that is enough to worry about.
After I finish my methods article, I want to lay out a basic picture of genomic emancipation. Genomic emancipation means making genomic liberty a right and a practical option. In my vision, genomic liberty is quite broad: it would include for example that parents should be permitted and enabled to choose:
to enhance their children (e.g. supra-normal health; IQ at the outer edges of the human envelope); and/or
to propagate their own state even if others would object (e.g. blind people can choose to have blind children); and/or
to make their children more normal even if there’s no clear justification through beneficence (I would go so far as to say that, for example, parents can choose to make their kid have a lower IQ than a random embryo from the parents would have in expectation, if that brings the kid closer to what’s normal).
These principles are narrower than general genomic liberty (“parents can do whatever they please”), and I think they have stronger justifications. I want to make these narrower “tentpole” principles inside of the genomic liberty tent, because the wider principle isn’t really tenable, in part for the reasons you bring up. There are genomic choices that should be restricted: perhaps by law, by professional ethics for clinicians, by avoiding making them technically feasible, or by social stigma. (The implementation seems quite tricky; any compromise of full genomic liberty comes with costs of its own, as well as preventing costs. And at least to some small extent, it erodes the force of genomic liberty’s opposition to eugenics, which seeks to impose population-wide forces on individuals’ procreative choices.)
Examples:
As you say, if there’s a very high risk of truly egregious behavior, that should be pushed against somehow.
Example: People should not make someone who is 170 Disagreeable Quotient and 140 Unconscientiousness Quotient, because that is most of the way to being a violent psychopath.
Counterexample: People should, given good information, be able to choose to have a kid who is 130 Disagreeable Quotient and 115 Unconscientiousness Quotient, because, although there might be associated difficulties, that’s IIUC a personality profile enriched with creative genius.
People should not be allowed to create children with traits specifically designed to make the children suffer. (Imagine for instance a parent who thinks that suffering, in itself, builds character or makes you productive or something.)
Case I’m unsure about, needs more investigation: autism combined with high IQ might be associated with increased suicidal ideation (https://www.sciencedirect.com/science/article/abs/pii/S1074742722001228). Not sure what the implication should be.
Another thing to point out is that, to a significant degree, over the longer term many of these things should self-correct, through the voice of the children (e.g. if a deaf kid grows up and starts saying “hey, listen, I love my parents and I know they wanted what was best for me, but I really don’t like that I didn’t get to hear music and my love’s voice until I got my brain implant, please don’t do the same for your kid”), and through seeing the results in general. If someone is destructively ruthless, it’s society’s job to punish them, and it’s parents’ job to say “ah, that is actually not good”.
In that case I’d repeat GeneSmith’s point from another comment: “I think we have a huge advantage with humans simply because there isn’t the same potential for runaway self-improvement.” If we have a whole bunch of super smart humans of roughly the same level who are aware of the problem, I don’t expect the ruthless ones to get a big advantage.
I mean, I guess there is some sort of general concern here about how the offense-defense balance changes as the population gets smarter. Like, if there’s some easy way to destroy the world that becomes accessible with IQ > X, and we make a bunch of people with IQ > X, and a small fraction of them want to destroy the world for some reason, are the rest able to prevent it? This is sort of already the situation we’re in with AI: we look to be above the threshold of “ability to summon ASI”, but not above the threshold of “ability to steer the outcome”. In the case of AI, I expect making people smarter differentially speeds up alignment over capabilities: alignment is hard and we don’t know how to do it, while hill-climbing on capabilities is relatively easy and we already know how to do it.
I should also note that we have the option of concentrating early adoption among nice, sane, x-risk-aware people (though I also find this kind of cringe, and predict it would be an unpopular move). I expect this to happen by default to some extent.