When I’ve brought this up, a few people asked why we don’t just put all the AI content on the Alignment Forum. This is a fairly obvious question, but:
a) It’d be a pretty big departure from what the Alignment Forum is currently used for.
b) I don’t think it really changes the fundamental issue of “AI is what lots of people are currently thinking about on LessWrong.”
The Alignment Forum’s current job is not to be a comprehensive collection of all AI content; it’s meant to host especially good content with a high signal/noise ratio. All Alignment Forum posts are also LessWrong posts, and LessWrong is meant to be the place where most discussion of them happens. The AF versions of posts are primarily meant to be something you can link to professionally, without having to explain the context of a lot of weird, not-obviously-related topics that show up on LessWrong.
We created the Alignment Forum ~5 years ago, and it’s plausible the world needs a new tool now. BUT, it still feels like a weird solution to try to move the AI discussion off of LessWrong. AI is one of the central topics motivating many other LessWrong interests. LessWrong is about the art of rationality, but one of the important lenses here is “how would you build a mind that was optimally rational, from scratch?”
https://www.lesswrong.com/posts/P32AuYu9MqM2ejKKY/so-geez-there-s-a-lot-of-ai-content-these-days