Not at all a dumb question.
I assume the idea is to test “Rationality”.
Basically, how good are LW readers at being right about tricky stuff relative to similarly intelligent folk who aren’t LW readers?
I would think this would encompass (non-exhaustively) decision-making under risk and uncertainty, inference from incomplete information, making accurate predictions, and being well-calibrated.
Precisely what you would test (which may have been your point) is a very good question and is not at all obvious.
My original idea was to test the extent of cognitive biases (which we know can be measured, since scientific testing is how they were discovered in the first place), but that works too; either would serve the idea’s purposes.