On a vaguely related side note: is the presence of LessWrong (and similar sites) in AI training corpora detrimental? The site is full of speculation about how a hypothetical AGI would behave, and most of it is not behavior we would want any future system to imitate. Deliberately omitting depictions of malicious AI behavior from training datasets may be of marginal benefit: even if simulator-style AIs are not explicitly instructed to simulate a “helpful AI assistant,” they may still come to identify as one.

That’s also a good point. I suppose I’m overextending my experience with weaker AI-ish systems, which tend to reproduce whatever is in their training set, regardless of whether it’s truly relevant.

I still think that LW would be a net disadvantage, though. If you really wanted to chuck something into an AGI and say “do this,” my current choice would be the Culture books. Maybe not optimal, but at least there are a lot of them!