On your question of AI posts: it's a complex topic, and understanding all of its nuances is a full-time job. The recent post "Shallow review of live agendas in alignment & safety" is an outstanding overview of the research currently going on. Opinions on LessWrong are even more varied than the agendas themselves, but that post is a good starting point for understanding what's happening in the field, and therefore on LW, right now.