I don’t understand your position. My position is:
1. Higher intelligence and health and happiness-set-point are unambiguous good directions for the genome to go. Blindness and dwarfism are unambiguous bad directions for the genome to go.
2. Therefore, the statement “there are no unambiguous good directions for genomes to go” is false.
3. Since the statement is false, it is a bad argument.
Which step of this chain, specifically, do you disagree with? It sounds like you disagree with the first point.
But then you say the fact that “any reasonable cost-benefit analysis will find that intelligence and health and high happiness-set-point are good” is irrelevant to your argument.
So it seems your argument is “even if all reasonable cost-benefit analyses agree, things are still ambiguous”. Is that really your position?
So it seems your argument is “even if all reasonable cost-benefit analyses agree, things are still ambiguous”. Is that really your position?
Yeah. Well, we’re being vague about “reasonable”.
If by “reasonable” you mean “in practice, no one could, given a whole month of discussion, argue me into thinking otherwise”, then I think it’s still ambiguous even if all reasonable CBAs agree.
If by “reasonable” you mean “anyone of sound mind doing a CBA would come to this conclusion”, then no, it wouldn’t be ambiguous. But I also wouldn’t say that it should be protected. Basically by assumption, what we’re protecting is genomic liberty of parents; we’re discussing the case where a blind parent of sound mind, having been well-informed by their clinic of the consequences and perhaps given an enforced period of reflection, and hopefully having consulted with their peers, has decided to make their child blind.
If there’s no parent of sound mind making such a decision, then there’s no question of policy that we have to resolve. If there is, then I’m saying in most cases (with some recognized exceptions) it’s ambiguous.
By “reasonable” I meant “is consistent with near-universal human values”. For instance, humans near-universally value intelligence, happiness, and health. If an intervention decreases these, without corresponding benefits to other things humans value, then the intervention is unambiguously bad.
Instead of “the principle of genomic liberty”, I would prefer a “do no harm” principle. If you don’t want to do gene editing, that’s fine. If you do gene editing, you cannot make edits that, on average, your children will be unhappy about. Take the following cases:
1. Parents want to genetically modify their child from an IQ of 130 to an IQ of 80.
2. Parents want to genetically modify their child to be blind.[1]
3. Parents want to genetically modify their child to have persistent mild depression.[2]
People generally prefer to be intelligent and happy and healthy. Most people who have low intelligence or are blind or depressed wish things were otherwise. Therefore, such edits would be illegal.
(There may be some cases where “children are happy about the changes on net after the fact” is not restrictive enough. For instance, suppose a cult genetically engineers its children to be extremely religious and extremely obedient, and then tells them that disobedience will result in eternal torment in the afterlife. These children will be very happy that they were edited to be obedient.)
A concrete example of where I disagree with the “principle of genomic liberty”: Down syndrome removes ~50 IQ points. The principle of genomic liberty would give a Down syndrome parent with an IQ of 90 the right to give a 140 IQ embryo Down syndrome, reducing the embryo’s IQ to 90 (this is allowed because 90 IQ is not sufficient to render someone non compos mentis).
[1] Explicitly allowed by the principle of genomic liberty if one of the parents is blind.
[2] Major depression is explicitly not protected by the principle of genomic liberty.
I’m still unclear how much we’re talking past each other. In this part, are you suggesting this as law enforced by the state? Note that this is NOT the same as
For instance, humans near-universally value intelligence, happiness, and health. If an intervention decreases these, without corresponding benefits to other things humans value, then the intervention is unambiguously bad.
because you could have an intervention that does result in less happiness on average, but also has some other real benefit; but isn’t this doing some harm? Does it fall under “do no harm”?
And as always, the question here is, “Who decides what harm is?”.
(There may be some cases where “children are happy about the changes on net after the fact” is not restrictive enough. For instance, suppose a cult genetically engineers its children to be extremely religious and extremely obedient, and then tells them that disobedience will result in eternal torment in the afterlife. These children will be very happy that they were edited to be obedient.)
Yes, I agree, and in fact specifically brought up (half of) this case in the exclusion for permanent silencing. Quoting:
For example, it could be acceptable to ban genomic choices that would make a future child supranormally obedient, to the point where they are very literally incapable of communicating something they have not been told to communicate. [...]
You write:
Down syndrome removes ~50 IQ points. The principle of genomic liberty would give a Down syndrome parent with an IQ of 90 the right to give a 140 IQ embryo Down syndrome, reducing the embryo’s IQ to 90 (this is allowed because 90 IQ is not sufficient to render someone non compos mentis).
In practice, my guess is that this would pose a quite significant risk of making the child non compos mentis, and therefore unable to sufficiently communicate their wellbeing; so it would be excluded from protection. But in theory, yes, we have a disagreement here. If the parent is compos mentis, then who the hell are you to say they can’t have a child like themselves?
For instance, humans near-universally value intelligence,
How many people have you talked to about this topic? Lots of people I talk to value intelligence and would want to give their future kid intelligence; lots of people value it but say they wouldn’t want to influence it; some people say they don’t value it; and some even say they anti-value it (e.g. preferring their kid to be more normal).
I’m not sure how to communicate across a gap here… There’s a thing that it seems like you don’t understand, that you should understand, about law, the state, freedom, coercion, etc. There’s a big injustice in imposing your will on others, and you don’t seem to mind this. This principle of injustice is far from absolute; I endorse lots of impositions, e.g. no gouging out your child’s eyes. But you seem to just not mind being like “ok, hm, which ways of living are good, ok, this is good and this is good, this is bad and this is bad, OK GUYS I FIGURED IT OUT, you may do X and you may not do Y, that is the law, I have spoken”. Maybe I’m missing you, but that’s what it sounds like. And I just don’t think this is how the law is supposed to work.
There is totally a genuine tough issue here, where the law should have some interest in protecting everyone, including young children from their parents, and yes to some extent even future children. But I feel our communication is dancing around this, where maybe you just don’t agree that the law should be very reluctant to impose?
The topic has drifted from my initial point, which is that there exist some unambiguous “good” directions for genomes to go. After reading your proposed policy it looks like you concede this point, since you are happy to ban gene editing that causes severe mental disability, major depression, etc. Therefore, you seem to agree that going from “chronic severe depression” to “typical happiness set point” is an unambiguous good change. (Correct me if I am wrong here.)
I haven’t thought through the policy questions at any great length. Actually, I made up all my policy positions on the fly while talking to you. And I haven’t thought about the coalition-building aspect at all. But my current position is that, if we had a highly competent government that could be trusted to reasonably interpret the rules, I would want them to enforce the following:
Don’t allow unambiguous net harm. (Reasonable tradeoffs are fine. Err on the side of permissiveness.)
The best experts on whether “unambiguous net harm” was done are the people who were edited.
Although in rare cases we may have to overrule them, such as the cult example above. This is especially the case if cult members have objectively bad outcomes (e.g., high rates of depression and suicide) despite claiming to be happy.
If we have high confidence that the edited people will have regrets (e.g. based on observing existing people with the condition) we can prohibit the edits without running the experiment. Allowing “unambiguous net harm” edits to be performed for a generation has a high cost.
In some cases I am more permissive than you are. I don’t think we have enough evidence to determine that removing the emotion of fear is “unambiguous net harm”, but it would be prohibited under your “no removing a core aspect of humanity” exception. (Perhaps a generation from now we would have enough data to justify banning it under my rules. But I suspect it has sufficient upside to remain legal.)
Brief reactions to things you said:
some even say they anti-value it [intelligence]
I think a lot of people who say they anti-value intelligence are coping (I am dumb therefore dumbness is a virtue) or being tribalistic (I hate nerdy people who wear glasses, they remind me of the outgroup). If they perceived their ingroup and themselves as being intelligent, I think they would change their tune.
Also, intelligent people strongly value intelligence. And since they are smarter, we should weight their opinions more heavily :P
There’s a big injustice in imposing your will on others
In this case, we are preventing the parents from imposing their will on the future child.
If the parent is compos mentis, then who the hell are you to say they can’t have a child like themselves?
I am the Law, the Night Watchman State, the protector of innocents who cannot protect themselves. Your children cannot prevent you from editing their genes in a way that harms them, but the law can and should.
if we had a highly competent government that could be trusted to reasonably interpret the rules,
Yeah, if this is the sort of thing you’re imagining, we’re just making a big different background assumption here.
I don’t think we have enough evidence to determine that removing the emotion of fear is “unambiguous net harm”, but it would be prohibited under your “no removing a core aspect of humanity” exception.
Yeah, on a methodological level, you’re trying to do a naive straightforward utilitarian consequentialist thing, maybe? And I’m like, this isn’t how justice and autonomy and the law work, it’s not how politics and public policy work, it’s not how society and cosmopolitanism work. (In this particular case, my justification about human dignity maybe doesn’t immediately make sense to you, but I think that not understanding the justification is a failure on your part—the justification might ultimately be wrong, I’m not at all confident, but it’s a real justification. See for example “What’s really wrong with genetic enhancement: a second look at our posthuman future”.)
Therefore, you seem to agree that going from “chronic severe depression” to “typical happiness set point” is an unambiguous good change. (Correct me if I am wrong here.)
No, this is going too far. The exception there would be for a medium / high likelihood of really bad depression, like “I can’t bring myself to work on anything for any sustained time, even stuff that’s purely for fun, I think about killing myself all the time for years and years, I am suffering greatly every day, I take no joy in anything and have no hope”, that kind of thing. Something more like “once in a while gets pretty down for a few weeks, has to take a bit of time off work and be sad in bed” is probably fine, and probably has good aspects, even if it is net-bad / net-dispreferable for most people and is somewhat below the typical happiness set-point. Mild high-functioning bipolar might be viewed by some people with that condition as important to who they are, and a source of strength and creativity. Or something, I don’t know. Decreasing their rates of depressive episodes by getting rid of bipolar is not an unambiguous good by any stretch.
I think a lot of people who say they anti-value intelligence are coping (I am dumb therefore dumbness is a virtue) or being tribalistic (I hate nerdy people who wear glasses, they remind me of the outgroup). If they perceived their ingroup and themselves as being intelligent, I think they would change their tune.
That’s all well and fine, but you’re still doing that thing where you say “X is unambiguously good” and I’m like “But a bunch of people say that X is bad” and you’re like “ha, well, you see, their opinion is bullshit, betcha didn’t think of that” and I’m like, we’re talking past each other lol.
Anyway thanks for engaging, I appreciate the contention and I found it helpful even though you’re so RAWNG.
Anyway thanks for engaging, I appreciate the contention and I found it helpful even though you’re so RAWNG.
You are welcome. It has been fun inventing the PERFECT government policy and giving so many 100% CORRECT takes.
(Also remember, even the best possible policy cannot survive execution by an incompetent and untrustworthy government. My policies are only good if they are actually followed.)
The question is if it really is their opinion. People often say things they don’t believe as cope or as tribal signalling. If a non-trivial number of people who perceive themselves and their ingroup as intelligent were to say they anti-value intelligence, that would update me.
Under my system we can ask people with below-average IQ whether they are happy to be below-average intelligence. If they are unhappy, outlaw gene editing for low intelligence. If they are happy, then either allow it, or decide to overrule them.
You want to be careful about overruling people. But intelligence is uniquely tricky because, if it is too low, people are not competent to decide what they want. Plus, people with low IQs have bad objective measures (e.g., significantly lower life expectancy).
IDK what to say… I guess I’m glad you’re not in charge? @JuliaHP I’ve updated a little bit that AGI aligned to one person would be bad in practice lol.
Haha, well, at least I changed your mind about something.
If we had ASI we could just let the children choose their own genes once they grow up. Problem solved.
With or without ASI, certainly morphological autonomy is more or less a universal good.
I am the Law, the Night Watchman State, the protector of innocents who cannot protect themselves. Your children cannot prevent you from editing their genes in a way that harms them, but the law can and should.
I do think this is an interesting and important consideration here; possibly the crux is quite simply trust in the state, but maybe that’s not a crux for me, not sure.