I’m not sure either way about somehow giving actual human beings superintelligence, but I don’t think that approach failing would imply there aren’t other possible-but-hard approaches.
I mean, I agree it’d be evidence that alignment is hard in general, but “impossible” is just… a really high bar? The space of possible minds is very large, and it seems unlikely that the quality “not satisfactorily close to being aligned with humans” is something that describes every superintelligence.
It’s not that the two problems are fundamentally different; it’s just that… I don’t see any particularly compelling reason to believe that superintelligent humans are the most aligned possible superintelligences?