I’m saying that (waves hands vigorously) 99% of people are beneficent or “neutral” (like, maybe not helpful / generous / proactively kind, but not actively harmful, even given the choice) in both intention and in action. That type of neutral already counts as being in a totally different league of alignment compared to AGI.
one human group is vastly unaligned to another human group
Ok, yes, conflict between large groups is something to be worried about, though I don’t much see the connection with germline engineering. I thought we were talking about, like, some liberal/techie/weirdo people have some really really smart kids, and then those kids are somehow a threat to the future of humanity that’s comparable to a fast unbounded recursive self-improvement AGI foom.
I’m saying that (waves hands vigorously) 99% of people are beneficent or “neutral” (like, maybe not helpful / generous / proactively kind, but not actively harmful, even given the choice) in both intention and in action. That type of neutral already counts as being in a totally different league of alignment compared to AGI.
I think this is ultimately the crux, at least relative to my values. I’d expect at least 20% of people in America to support active efforts to harm me or my allies/people I’m altruistic toward, and to do so fairly gleefully (an underrated example here is voting for people who will bring mass harm to groups they hate, and hoping that certain groups go extinct).
Ok, yes, conflict between large groups is something to be worried about, though I don’t much see the connection with germline engineering. I thought we were talking about, like, some liberal/techie/weirdo people have some really really smart kids, and then those kids are somehow a threat to the future of humanity that’s comparable to a fast unbounded recursive self-improvement AGI foom.
Okay, the connection was to point out that lots of humans are not in fact aligned with each other. I don’t particularly think superbabies are a threat to the future of humanity comparable to AGI, so my point was more that the alignment problem is not naturally solved in human-to-human interactions.
lots of humans are not in fact aligned with each other,
Ok… so I think I understand and agree with you here. (Though plausibly we’d still have significant disagreement; e.g. I think it would be feasible to bring even Hitler back and firmly away from the death fever if he spent, IDK, a few years or something with a very skilled listener / psychic helper.)
The issue in this discourse, to me, is comparing this with AGI misalignment. It’s conceptually related in some interesting ways, but in practical terms they’re just extremely quantitatively different. And, naturally, I care about this specific non-comparability being clear because it bears on whether to do human intelligence enhancement; and in fact many people cite this as a reason to not do human IE.
The issue in this discourse, to me, is comparing this with AGI misalignment. It’s conceptually related in some interesting ways, but in practical terms they’re just extremely quantitatively different. And, naturally, I care about this specific non-comparability being clear because it bears on whether to do human intelligence enhancement; and in fact many people cite this as a reason to not do human IE.
Re human vs AGI misalignment, I’d say this is true, in that human misalignments don’t threaten the human species, or even billions of people, whereas AI does, so in that regard I admit human misalignment is less impactful than AGI misalignment.
Of course, if we succeed at creating aligned AI, then human misalignments matter much, much more.
(The rest of the comment is a fun, tangentially connected scenario, but ultimately it’s a hypothetical that doesn’t matter that much for AI alignment.)
Ok… so I think I understand and agree with you here. (Though plausibly we’d still have significant disagreement; e.g. I think it would be feasible to bring even Hitler back and firmly away from the death fever if he spent, IDK, a few years or something with a very skilled listener / psychic helper.)
At the very least, that would require him to not be in control of Germany by that point, and IMO most histories of value change rely on changing a person’s values in their childhood and teen years, because that’s when their sensitivity to data is maximal. The plasticity/sensitivity of values goes way down when you are an adult, and changing values is much, much harder.
I’d say this is true, in that human misalignments don’t threaten the human species, or even billions of people, whereas AI does, so in that regard I admit human misalignment is less impactful than AGI misalignment.
Right, ok, agreed.
the plasticity/sensitivity of values goes way down when you are an adult, and changing values is much, much harder.
I agree qualitatively, but I do mean to say he’s in charge of Germany while somehow having hours of free time every day to spend with the whisperer. If it’s in childhood I would guess you could do it with a lot less contact, though I’m not sure. TBC, the whisperer here would be considered a world-class, like, therapist or coach or something, so I’m not saying it’s easy. My point is that I have a fair amount of trust in “human decision theory” working out pretty well in most cases in the long run with enough wisdom.
I even think something like this is worth trying with present-day AGI researchers (what I call “confrontation-worthy empathy”), though that is hard mode because you have so much less access.
I think it would be feasible to bring even Hitler back and firmly away from the death fever if he spent, IDK, a few years or something with a very skilled listener / psychic helper
There’s an important point to be made here: Hitler was not a genius, and in general the most evil people in history don’t correlate at all with the smartest people in history. In fact, the smartest people in history generally seemed more likely to contribute positively to the development of humanity.
I would posit that, with positive nurturing, it’s easier to make a high-IQ child good for society.
The alignment problem is thus perhaps less difficult with “superbabies”, because they can more easily see the irrationality in poor ethics and think better from first principles, being grounded in the natural alignment that comes from the fact that we are all humans with similar sentience (as opposed to AI, which might as well be a different species altogether).
Given that Hitler’s actions resulted in his death and the destruction of Germany, a much higher childhood IQ might even have blunted his evil.
I also don’t buy the idea that very smart humans automatically assume control. I suspect Kamala, Biden, Hillary, etc. all had a higher IQ than Donald Trump, but he became the most powerful person on the planet.
I’m saying that (waves hands vigorously) 99% of people are beneficent or “neutral” (like, maybe not helpful / generous / proactively kind, but not actively harmful, even given the choice) in both intention and in action. That type of neutral already counts as being in a totally different league of alignment compared to AGI.
My estimate is 97% not sociopaths, but only about 60% inclined to avoid teaming up with sociopaths.
Germline engineering likely destroys most of what we’re trying to save, via group conflict effects. There’s a reason it’s taboo.
Does the size of this effect, according to you, depend on parameters of the technology? E.g. if it clearly has a ceiling, such that it’s just not feasible to make humans who are in a meaningful sense 10x more capable than the most capable non-germline-engineered human? E.g. if the technology is widespread, so that any person / group / state has access if they want it?