lesswrong people upvoting stuff because they [...] wanna create “common knowledge” about how doomy we ought to be.
IMO a big failure of the ~2012–2020-era AI safety field was not creating common knowledge that AI risk is really bad actually. I think this is a common view among LWers, and upvoting doomy posts is a way to rectify this.
But I do think the problem is basically solved on LW at this point. LWers are mostly on the same page. The problem still exists elsewhere, and I think there’s value in writing about evidence of misalignment / evidence that alignment is hard / etc. as a useful thing to point to from outside LessWrong, but upvoting those posts to the moon is less useful.