If you apply some of these norms, then IMO there are questionable implications: for instance, it seems weird to say that one should have read the sequences in order to post about mechanistic interpretability on the Alignment Forum.
The AI Alignment Forum was never intended as the central place for all AI Alignment discussion. It was founded at a time when basically everyone involved in AI Alignment had read the sequences, and the goal was to just have any public place for any alignment discussion.
Now that the field is much bigger, I actually kind of wish there was another forum AI Alignment people could go to, so we would have more freedom in shaping a culture and a set of background assumptions that allow people to make further strides and create a stronger environment of trust.
I personally am much more interested in reading about mechanistic interpretability from people who have read the sequences. That field in particular is actually one where a good understanding of probability theory, causality, and philosophy of science seems especially important. (Again, it's not that important that someone acquired that understanding via the sequences rather than some other means, but the work really does benefit from a bunch of skills that are not standard in the ML or general scientific community.)
I expect we will make some changes here in the coming months, maybe by renaming the forum or starting off a broader forum that can stand more on its own, or maybe just shutting down the AI Alignment Forum completely and letting other people fill that niche.