I agree with this. The threat model is somewhat too narrow here, because a lab could simply tell a sufficiently capable AI to hijack people's minds / culture rather than wait for mind hijacking to arise as an instrumental sub-goal of something weird.