I’d say this is true, in that human misalignment doesn’t threaten the human species, or even billions of people, whereas AI misalignment does; so in that regard I admit human misalignment is less impactful than AGI misalignment.
Right, ok, agreed.
The plasticity/sensitivity of values goes way down when you are an adult, and changing values is much, much harder.
I agree qualitatively, but I do mean to say he’s in charge of Germany while somehow having hours of free time every day to spend with the whisperer. If it were in childhood, I would guess you could do it with a lot less contact, though I’m not sure. TBC, the whisperer here would be considered something like a world-class therapist or coach, so I’m not saying it’s easy. My point is that I have a fair amount of trust in “human decision theory” working out pretty well in most cases in the long run, given enough wisdom.
I even think something like this is worth trying with present-day AGI researchers (what I call “confrontation-worthy empathy”), though that is hard mode because you have so much less access.