The issue in this discourse, to me, is comparing this with AGI misalignment. The two are conceptually related in some interesting ways, but in practical terms they’re just extremely quantitatively different. And, naturally, I care about making this specific non-comparability clear because it bears on whether to pursue human intelligence enhancement; in fact, many people cite this as a reason not to do human IE.
Re human vs AGI misalignment, I’d say this is true, in that human misalignments don’t threaten the human species, or even billions of people, whereas AI does, so in that regard I admit human misalignment is less impactful than AGI misalignment.
Of course, if we succeed at creating aligned AI, then human misalignments matter much, much more.
(Rest of the comment is a fun tangentially connected scenario, but ultimately is a hypothetical that doesn’t matter that much for AI alignment.)
Ok… so I think I understand and agree with you here. (Though plausibly we’d still have significant disagreement; e.g. I think it would be feasible to bring even Hitler back and firmly away from the death fever if he spent, IDK, a few years or something with a very skilled listener / psychic helper.)
At the very least, that would require him to not be in control of Germany by that point, and IMO most histories of value change rely on changing a person’s values in their child/teen years, because that’s when their sensitivity to data is maximal. After that, the plasticity/sensitivity of values goes way down when you are an adult, and changing values is much, much harder.
> I’d say this is true, in that human misalignments don’t threaten the human species, or even billions of people, whereas AI does, so in that regard I admit human misalignment is less impactful than AGI misalignment.
Right, ok, agreed.
> the plasticity/sensitivity of values goes way down when you are an adult, and changing values is much, much harder.
I agree qualitatively, but I do mean to say he’s in charge of Germany, but somehow has hours of free time every day to spend with the whisperer. If it’s in childhood I would guess you could do it with a lot less contact, though not sure. TBC, the whisperer here would be considered a world-class, like, therapist or coach or something, so I’m not saying it’s easy. My point is that I have a fair amount of trust in “human decision theory” working out pretty well in most cases in the long run with enough wisdom.
I even think something like this is worth trying with present-day AGI researchers (what I call “confrontation-worthy empathy”), though that is hard mode because you have so much less access.