If we haven’t even learned to avoid the one bias that we can measure super well and which is most susceptible to training, what are we even doing here?
Attracting people who enjoy spending time with internet friends and feeling superior to outgroups?
I think this is the most likely failure mode of Less Wrong, and I'm unsure why so little is being done to address it. The idea of CFAR is great; its success just isn't tied to Less Wrong.
(Please don’t reply to or vote on this comment unless you have more than 200 predictions on PredictionBook.)
Edit: Apparently people are okay with using a metric of agreement (karma) as a barrier to downvoting on LW, but not a metric of calibration (number of PredictionBook predictions) as a barrier to voting here.
If it makes you feel better, one of the downvotes is mine and I have the most predictions of all on PredictionBook.
Downvoted for this. You shouldn’t try to stop people voting on your comments.