Are there examples of posts with factual errors you think would be caught by LLMs?
One thing you could do is fact-check a few likely posts and see whether the LLM adds substantial value. That would be more persuasive than abstract arguments.
Thanks for the suggestion! I added the “Edit 1” section to the post to showcase a small study of 3 posts known to contain factual mistakes. The LLM spots and corrects the mistake in 2 of the 3 cases, and provides valuable (though verbose) context. Overall this seems promising to me.
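For anyone wanting to try this themselves, here is a minimal sketch of the kind of fact-checking pass described above, assuming the OpenAI chat completions API. The model name, prompt wording, and placeholder post texts are all illustrative, not the exact setup used in Edit 1.

```python
# Minimal sketch of an LLM fact-checking pass over a few posts.
# Assumes the OpenAI Python SDK; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FACT_CHECK_PROMPT = (
    "You are a careful fact checker. Identify any factual claims in the "
    "post below that are likely wrong, explain why, and suggest a "
    "correction with supporting context. If you find no errors, say so."
)

def fact_check(post_text: str) -> str:
    """Return the model's fact-check report for a single post."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable model works
        messages=[
            {"role": "system", "content": FACT_CHECK_PROMPT},
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Placeholder texts; substitute the actual posts to check.
    posts = ["<post 1 text>", "<post 2 text>", "<post 3 text>"]
    for i, post in enumerate(posts, start=1):
        print(f"--- Post {i} ---")
        print(fact_check(post))
```

The interesting knob is the prompt: asking explicitly for corrections with supporting context (rather than just flagging errors) is what produces the verbose-but-valuable output noted above.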