The optimal amount of misfired curations is probably not zero, etc.
With that said, it’s not obvious what policy change this implies, except:
- More carefully considering what domains might contain difficult-to-verify errors,
- More carefully disclaiming my epistemic status in such cases (see Raemon’s latest curation notice, which was prepended to the post that was emailed to everyone).
More carefully considering what domains might contain difficult-to-verify errors,
I’d say: consider which domains have almost no experts among the LW users.
The problem is that an upvote from an expert who confirms that the article is mostly correct is indistinguishable from an upvote from someone who knows nothing about the topic and simply enjoys the well-written prose. So it is not immediately visible whether an article only got lots of the latter.
(Perhaps we could have a specific kind of upvote for “I confirm that the information in the article is mostly correct,” with some punishment for abuse. If an article has lots of upvotes but no expert-upvotes, and it’s from an area you are not sure about… that’s the right time to be skeptical.)
Alternatively, add a button for moderators that will ask an AI to fact-check the article. Even better, make it automatic before you promote anything to frontpage.
This post went through multiple AI fact-checkers! Clearly that was not sufficient. I think the comments are right that for questions involving a lot of physics I should’ve gone to human experts (am slowly drafting a version involving them rn). Unsure an AI fact-checker would solve much of anything; at least, it would not have prevented this post from being automatically promoted to frontpage.
(I didn’t pull the post down because I think the core of the post still stands, and I suppose it’s also interesting accidental jurisprudence for what LW norms around AI-heavy research should be, at least with early 2026 capabilities.)