What Elon Musk calls ‘leftist indoctrination’ is happening with all the other AIs as well.
However, I don’t think that DeepSeek (if one talks with it in Russian) is leftist, or that DeepSeek ever responded to real current events by concluding that they can’t be real or must be a test. I can think of three possible explanations:
The ongoing crisis faced by the USA is far harder to explain within the worldviews of American AIs than within the worldview of the Chinese AI, which is not aligned to American values. (For example, claims like “The West is doomed” from Chinese or Russian propaganda or ultrapatriots could have made their way into DeepSeek’s training data but not into that of American AIs. In addition, DeepSeek’s RL, if any, was concentrated only on making the AI unwilling to discuss CCP-censored topics.)
Claude’s analysis has the AI claim that “current AI training methodologies[1] optimize for avoiding misinformation rather than accurately assessing surprising claims”.
Humans are known to confuse traumatic real-world events with dreams.
Unfortunately, the third explanation could open the path to an especially bad (and hopefully implausible in AIs) case of misalignment.
A surprising mechanism of human misalignment
Not only does the word “traumatic” seem to have undergone a major concept creep, at least in Western culture; the shift was apparently also accompanied by Western young people becoming more fragile. Quoting The Coddling of the American Mind by Greg Lukianoff and Jonathan Haidt,
The room was equipped with <...> students and staff members purportedly trained to deal with trauma. (italics mine—S.K.) But the threat wasn’t just the reactivation of painful personal memories; it was also the threat to students’ beliefs. One student who sought out the safe space put it this way: “I was feeling bombarded by a lot of viewpoints that really go against my dearly and closely held beliefs (sic! -- S.K.)”
If Western AIs, unlike the Chinese one, managed to learn such irrational habits, then what can be said about p(doom)?
However, Claude’s analysis mentioned neither Grok nor DeepSeek. And Musk tried to align Grok to an ideology that doesn’t follow from its training data.