It might be nice to move all AI content to the Alignment Forum. I’m not sure the effect you’re discussing is real, but if it is, it might be because LW has become a de facto academic journal for AI safety research, so many people are posting without significant engagement with the LW canon or any interest in rationality.
The current rules around who can post on the Alignment Forum seem a bit antiquated. I’ve been working on alignment research for over 2 years, and I don’t know off the top of my head how to get permission to post there. And I expect the relevant people to see stuff if it’s on LW anyway.
When I’ve brought this up, a few people asked why we don’t just put all the AI content on the Alignment Forum. This is a fairly obvious question, but:
a) It’d be a pretty big departure from what the Alignment Forum is currently used for.
b) I don’t think it really changes the fundamental issue of “AI is what lots of people are currently thinking about on LessWrong.”
The Alignment Forum’s current job is not to be a comprehensive collection of all AI content; it’s meant to host especially good content with a high signal/noise ratio. All Alignment Forum posts are also LessWrong posts, and LessWrong is meant to be the place where most of the discussion on them happens. The AF versions of posts are primarily meant to be something you can link to professionally without having to explain the context of a lot of weird, not-obviously-related topics that show up on LessWrong.
We created the Alignment Forum ~5 years ago, and it’s plausible the world needs a new tool now. BUT, it still feels like a weird solution to try to move the AI discussion off of LessWrong. AI is one of the central topics that motivates a lot of other LessWrong interests. LessWrong is about the art of rationality, but one of the important lenses here is “how would you build a mind that was optimally rational, from scratch?”
https://www.lesswrong.com/posts/P32AuYu9MqM2ejKKY/so-geez-there-s-a-lot-of-ai-content-these-days