lots of humans are not in fact aligned with each other,
Ok… so I think I understand and agree with you here. (Though plausibly we’d still have significant disagreement; e.g. I think it would be feasible to bring even Hitler back and firmly away from the death fever if he spent, IDK, a few years or something with a very skilled listener / psychic helper.)
The issue in this discourse, to me, is comparing this with AGI misalignment. It’s conceptually related in some interesting ways, but in practical terms they’re just extremely quantitatively different. And, naturally, I care about this specific non-comparability being clear because it bears on whether to pursue human intelligence enhancement; in fact, many people cite this comparison as a reason not to pursue human IE.
Re human vs AGI misalignment, I’d say this is true, in that human misalignments don’t threaten the human species, or even billions of people, whereas AI does, so in that regard I admit human misalignment is less impactful than AGI misalignment.
Of course, if we succeed at creating aligned AI, then human misalignments matter much, much more.
(The rest of the comment is a fun, tangentially connected scenario, but ultimately it’s a hypothetical that doesn’t matter much for AI alignment.)
Ok… so I think I understand and agree with you here. (Though plausibly we’d still have significant disagreement; e.g. I think it would be feasible to bring even Hitler back and firmly away from the death fever if he spent, IDK, a few years or something with a very skilled listener / psychic helper.)
At the very least, that would require him to not be in control of Germany by that point, and IMO most historical cases of value change rely on values changing in the child-to-teen years, because that’s when sensitivity to data is maximal. After that, the plasticity/sensitivity of values goes way down when you are an adult, and changing values is much, much harder.
I’d say this is true, in that human misalignments don’t threaten the human species, or even billions of people, whereas AI does, so in that regard I admit human misalignment is less impactful than AGI misalignment.
Right, ok, agreed.
the plasticity/sensitivity of values goes way down when you are an adult, and changing values is much, much harder.
I agree qualitatively, but I do mean the scenario where he’s still in charge of Germany yet somehow has hours of free time every day to spend with the whisperer. If it’s in childhood, I would guess you could do it with a lot less contact, though I’m not sure. TBC, the whisperer here would be considered something like a world-class therapist or coach, so I’m not saying it’s easy. My point is that I have a fair amount of trust in “human decision theory” working out pretty well in most cases in the long run, given enough wisdom.
I even think something like this is worth trying with present-day AGI researchers (what I call “confrontation-worthy empathy”), though that is hard mode because you have so much less access.
I think it would be feasible to bring even Hitler back and firmly away from the death fever if he spent, IDK, a few years or something with a very skilled listener / psychic helper
There’s an important point to be made here: Hitler was not a genius, and in general, being among the most evil people in history doesn’t correlate at all with being among the smartest. In fact, the smartest people in history generally seem to have been more likely to contribute positively to the development of humanity.
I would posit that, with positive nurturing, it’s easier to make a high-IQ child good for society.
The alignment problem is thus perhaps less difficult with “super babies”: they can more easily see the irrationality in poor ethics and reason better from first principles, grounded in the natural alignment that comes from the fact that we are all humans with similar sentience (as opposed to AI, which might as well be a different species altogether).
Given that Hitler’s actions ultimately resulted in his own death and the destruction of Germany, a much higher childhood IQ, and the foresight that comes with it, might even have blunted his evil.
I also don’t buy the idea that very smart humans automatically assume control. I suspect Kamala Harris, Joe Biden, Hillary Clinton, etc. all had higher IQs than Donald Trump, yet he became the most powerful person on the planet.