I try pretty hard (and I think most of the team does) to at least moderate AI x-risk criticism more leniently. But of course, it’s tricky to know if you’re doing a good job. Am I undercorrecting or overcorrecting for my bias? If you ever notice some examples that seem like moderation bias, please lmk!
Of course, moderation is only a small part of what drives the site culture/reward dynamics.
Yeah, to be clear: although I would act differently, I do think the LW team both tries hard to do well here and tries more effectively than most other teams would.
It’s just that once LW has become much more of a Schelling point for doom than for rationality, there’s a pretty steep natural slope.