Yeah, something like the Alignment Forum would actually be pretty good, and while LW/AF has a lot of problems, most of them are attributable to the people and culture around here rather than to the platform itself.
The LW/AF tooling would be extremely helpful for a lot of scientists, once you divorce it from the culture.
The thing I’ll say on the orthogonality thesis is that I think it’s actually fairly obvious, but only because it makes an extremely weak claim: that it’s logically possible for an AI to be misaligned. The critical mistake is assuming that mere possibility translates into non-negligible likelihood.
It’s useful for historical purposes, but not helpful at all for alignment, since it fails to answer the essential questions, like how likely misalignment actually is.