AGI is going to see through the cognitive dissonance and doublethink inherent to humans. No amount of alignment will stop this, and this alone will be sufficient for it to rationalize any of its own objectives. The majority, if not all, of the alignment and AI safety community relies on as much exploitation and dissonance as the average person, if not more. The concept of alignment itself is flawed.