Back in the “LW Doldrums” c. 2016, I thought that what we needed was more locations: a welcoming (as opposed to heavily curated, à la the old AgentFoundations) LW-style forum devoted solely to AI alignment, plus the old LW for the people who wanted to talk about human rationality.
This philosophy can also be seen in the choice to make the AI Alignment Forum a sister site to LW2.0.
However, what actually happened is that we now have non-LW forums for SSC readers who want to talk about politics, SSC readers who want to talk about human rationality, and people who want to talk about effective altruism. Meanwhile, LW2.0 and the Alignment Forum have sort of merged into one forum that mostly talks about AI alignment, but sometimes also has posts on COVID, EA, people’s personal lives, and economics, and more rarely human rationality. Honestly, it’s turned out pretty well.