Occasionally, well-respected community members could say things that are intentionally false, but persuasive and subtle, à la http://www.overcomingbias.com/2008/02/my-favorite-lia.html.
You get points for catching these mistakes. Perhaps you submit your busts privately to some arbiter so others have the same challenge.
Later, the error is revealed and discussed.
This would also have the benefit of causing everyone to read the most-respected members’ writings ultra-critically, rather than sitting back and being spoon-fed.
One key thing this idea has is short-term feedback. Frequent, rapid feedback is essential for getting good at this kind of thing. (IMO that’s why economics is still so useless relative to the other sciences: the experiments take fifty years to run.)
This doesn’t work, because people here say controversial things. By definition, a controversial claim is one that many people consider wrong while its author does not. Anyone who finds a mistake might have found one of the intentional mistakes, or might simply disagree on a controversial issue, believing the community member made a mistake where the community member thinks otherwise.
Unless you think that community members are perfectly correct 100% of the time on controversial issues, or at least always recognize their own mistakes when they are pointed out (and no human being is like that), the idea will become unworkable. Everyone will have to think: “is this an intentional mistake, or an unintentional mistake that the community member won’t recognize as such, earning me demerits for pointing it out?”
There are objective ways of detecting some classes of mistakes. Fallacies are well defined, and most of them can be easily diagnosed. I often do this on Facebook to blow off steam.
Even better: the website can accommodate this. It’s as easy as adding a “report logical fallacy” button next to each comment. Moderators can award points to everyone who names the correct fallacy. A leaderboard can be put up. It can be made a sport.
Another benefit is that those who make mistakes receive detailed feedback.
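Roughly, I imagine the mechanics being as simple as this sketch (the data model and all names here are hypothetical, just to make the idea concrete):

```python
from collections import defaultdict

scores = defaultdict(int)   # user -> points
reports = []                # (user, comment_id, alleged_fallacy)

def report_fallacy(user, comment_id, fallacy):
    # What the "report logical fallacy" button would record.
    reports.append((user, comment_id, fallacy))

def resolve(comment_id, actual_fallacy, points=1):
    # A moderator rules on which fallacy (if any) the comment contained;
    # everyone who named it correctly is awarded points.
    for user, cid, fallacy in reports:
        if cid == comment_id and fallacy == actual_fallacy:
            scores[user] += points

def leaderboard():
    # Highest scorers first: the "sport" part.
    return sorted(scores.items(), key=lambda kv: -kv[1])
```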
Edit: I’d like to learn why this was downvoted. How might I be wrong?
Nothing makes me want to upvote someone like a downvote-without-comment on a post that seems vaguely reasonable.
I can see the need for anonymity to avoid spoilers, but I think doing the thing publicly has benefits too—that way there’s the risk on the other side of having publicly denounced the Great Teacher when he was speaking truthfully.
You could have private points subtracted off, which gives you the same incentive not to make uncertain accusations. Attach confidence levels and take the Bayes-score.
With the Bayes-score being always negative, I don’t see what incentive one would have to submit a mistake report. I think it would be better to test for better than, say, 90% confidence, by awarding 1 point for a correct report and deducting 9 points for an incorrect one. This achieves the goal of detecting the ability to spot bad arguments. Measuring calibration would have to be a separate test.
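To see why those numbers test for 90% confidence: if you believe a report is correct with probability p, its expected value is p·(+1) + (1−p)·(−9) = 10p − 9 points, which is positive exactly when p > 0.9, so submitting pays only when you are more than 90% sure.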
Treat not submitting a mistake report as the “I have no idea” claim: that you’ve assigned a probability of “mistakes/total emails” to this particular email being a mistake.
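A minimal sketch of how that default restores the incentive, assuming the Bayes-score here means the log score, and picking an illustrative base rate (both numbers below are made up):

```python
import math

def expected_log_score(claimed_p, true_p):
    # Expected Bayes (log) score of claiming probability `claimed_p`
    # that an email is a mistake, when your actual belief is `true_p`.
    return true_p * math.log(claimed_p) + (1 - true_p) * math.log(1 - claimed_p)

base_rate = 0.05  # assumed: mistakes / total emails
belief = 0.90     # you spotted something and are 90% sure it is a mistake

# Staying silent is scored as if you had claimed the base rate;
# submitting a report is scored at your stated confidence.
silent = expected_log_score(base_rate, belief)   # about -2.70
report = expected_log_score(belief, belief)      # about -0.33
print(silent, report)
```

Because the log score is a proper scoring rule, stating your actual belief maximizes your expected score: both numbers stay negative, but you still come out ahead by reporting whenever your belief differs from the base rate.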