Reposting my comment from your post on Omnilibrium:
You failed to address, or even acknowledge, the question of who fact-checks the fact-checkers. For example, you mention PolitiFact; it has acquired a reputation for downplaying some politicians' lies, and in some cases even outright classifying true statements by others as lies.
In general, this proposal is just silly. After all, the media is supposed to fact-check politicians, but it is rather notorious for its own biases and even occasional lies. Why would we expect self-proclaimed fact-checkers to be any better?
Also, judging by the upvotes this post has received and the rest of the comments, it appears even most LWers will accept someone's claim to be stating facts without question.
This is a fully general counterargument to everything from Consumer Reports to Examine.com to the organic movement. Basically, anything that attempts to help people be better informed can be accused of lost purposes.
I think you could steelman this as "You should only use fact-checkers who don't have significant adverse incentives". Consumer Reports and Examine.com fit the bill; PolitiFact may not.
That’s fair.
Which cases do you mean?
They operate under somewhat different incentives. PolitiFact gains less from writing sensational stories than classic news outlets do.
That’s not self-evident to me. They still want eyeballs and clicks.
I basically remembered FactCheck.org's funding model and thought PolitiFact used the same one.
PolitiFact does make money via advertising. At the same time, I expect its reputational needs are a bit different.
I’d prefer the framing that it’s not a fact-checker, but rather an inconsistency-detector. Rather than “this bot detected the claim that vaccines cause autism, which is wrong”, it’d say “this bot detected the claim that vaccines cause autism, which is in conflict with the view held by The Lancet, one of the world’s most prominent medical journals”. Or in 1930, it might have reported “this bot detected the claim that continents drift, which is in conflict with the scientific consensus of leading geology journals”.
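The inconsistency-detector framing above can be sketched in a few lines of code. This is a toy illustration under stated assumptions, not a real system: the claim database, the source names, and the matching logic are all hypothetical stand-ins (a real detector would need claim extraction and paraphrase matching).

```python
# A minimal sketch of the "inconsistency-detector" framing: instead of
# ruling a claim true or false, the bot only reports which sources
# hold a conflicting view. All entries below are illustrative.

from dataclasses import dataclass

@dataclass
class Stance:
    source: str       # e.g. a journal or statistics office
    claim: str        # canonical form of the claim
    agrees: bool      # does the source endorse the claim?

# Toy knowledge base of recorded source stances (hypothetical).
STANCES = [
    Stance("The Lancet", "vaccines cause autism", agrees=False),
    Stance("leading geology journals (1930)", "continents drift", agrees=False),
]

def report(detected_claim: str) -> str:
    """Emit a conflict report, never a verdict."""
    conflicts = [s.source for s in STANCES
                 if s.claim == detected_claim and not s.agrees]
    if conflicts:
        return ("this bot detected the claim that " + detected_claim
                + ", which is in conflict with the view held by "
                + ", ".join(conflicts))
    return "this bot detected the claim that " + detected_claim + "; no conflicts recorded"

print(report("vaccines cause autism"))
```

The key design point is that the output never asserts truth, only disagreement with named sources, which sidesteps the "who fact-checks the fact-checkers" objection by making the bot's claims mechanically auditable.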
In that case, I don’t see the point. After all, anti-vaxxers don’t deny that there are prominent medical professionals who don’t agree with their position. They, however, suspect that said professionals are doing so due to a combination of biases and money from the vaccine industry.
But not all people in the audience would react like that to michaelkeenan’s example warning. Some people would presumably value being informed of authoritative sources contradicting a claim that vaccines cause autism.
(And if your objection went through for fact checking framed as contradiction reporting, why wouldn’t it go through for fact checking framed as fact checking? My mental model of an anti-vaxxer has them responding as negatively to being baldly contradicted as to being informed, “The Lancet says this is wrong”.)
The anti-vax thing is one of the hardest cases. More often, people are just accidentally wrong. Like this exchange at Hacker News, which had checkable claims like:
“The UK is a much more violent society than the US, statistically”
“There are dozens of U.S. cities with higher per capita murder rates than London or any other city in the UK”
“Murder rates are higher in the US, but murder is a small fraction of violent crime. All other violent crime is much more common in the UK than in the US.”
There would also be a useful effect for observers. That Hacker News discussion contained no citations, so no one was convinced and I doubt any observers knew what to think. But if a fact-checker bot were noting which claims were true and which weren't, then observers would know which claims were correct (or rather, which claims were consistent with official statistics).
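For the murder-rate claims quoted above, the bot's check would reduce to simple per-capita arithmetic against official statistics. A minimal sketch, using placeholder figures rather than real data:

```python
# Per-capita rate comparison of the kind the bot would run against
# official statistics. All figures below are placeholders, not real data.

def rate_per_100k(events: int, population: int) -> float:
    """Convert a raw count into a rate per 100,000 residents."""
    return events / population * 100_000

# Hypothetical figures for two cities.
city_a = rate_per_100k(300, 2_000_000)   # 15.0 per 100k
city_b = rate_per_100k(120, 9_000_000)   # ~1.33 per 100k

# The bot reports consistency with the cited data, not a bare verdict.
claim_consistent = city_a > city_b
print(claim_consistent)  # True under these placeholder figures
```

With real inputs the hard part is not the arithmetic but locating and citing the authoritative dataset, which is exactly the citation the original discussion lacked.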
If these fact-checkers were extremely common, it could still help anti-vaccine people. If you’re against vaccines, but you’ve seen the fact-checker bot be correct 99 other times, then you might give credence to its claims.
That’s subject to Goodhart’s Law. If you start judging bots by their behavior in other cases, people will take advantage of your judging process by specifically designing bots to do poor fact checking on just a couple of issues, thus making it useless to judge bots based on their behavior in other cases.
(Of course, they won’t think of it that way, they’ll think of it as “using our influence to promote social change” or some such. But it will happen, and has already happened for non-bot members of the media.)
Heck, Wikipedia is the prime example.
I don’t know why someone downvoted this, unless it was out of the political motivation of desiring to promote such changes in this way. It seems obviously true that this would happen.
If people on LW are using Bayesian updating properly and check comments for refutations (which some commenters love to do), then this shouldn’t be as large a problem.
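The "seen the bot be correct 99 other times" point upthread can be made concrete as a toy Bayesian update. All the numbers here are invented for illustration; note that the calculation only holds if accuracy on observed cases predicts accuracy on contested ones, which is precisely what the Goodhart objection denies.

```python
# Toy Bayesian update on a fact-checker bot's reliability, in the
# spirit of the "correct 99 other times" point. Priors and
# likelihoods are made up for illustration.

def posterior_reliable(prior: float, correct: int,
                       p_correct_if_reliable: float = 0.95,
                       p_correct_if_unreliable: float = 0.5) -> float:
    """P(bot is reliable | it was judged correct `correct` times in a row)."""
    like_r = p_correct_if_reliable ** correct
    like_u = p_correct_if_unreliable ** correct
    return prior * like_r / (prior * like_r + (1 - prior) * like_u)

# Even a skeptical 10% prior is overwhelmed by a 99-case track record.
print(posterior_reliable(0.10, 99))
```

Under these assumed likelihoods the posterior is driven almost entirely by the track record, which is why a Goodharted bot that is accurate everywhere except a few chosen issues would be so effective at borrowing trust.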