Two comments:
1. The hard part about this seems to be finding a control group. I’m pretty sure that the average LW reader would have done better on any test you can find that’s supposed to measure “rationality” before they’d read any of the site. Where do you get a group of “people who haven’t read LW yet, but are the sort of person who might read LW”?
2. If we did manage to find a control group, what’s supposed to be the benefit of asking a non-LWer to decide on the tests? This is supposed to be an experiment to actually find out stuff about the universe: we have just as much interest as the average person, if not more, in its results being accurate.
Regarding 2, the reason to have a non-LWer is presumably that we are more likely to introduce subtle biases that favor LWers. Don’t underestimate the human capacity for self-deception.
You have to compare that to the baseline chance of someone being biased, though. It might be that the bias introduced by wanting LessWrong to show actual gains is smaller than the gap in competence between a LWer and the average person.
You also have to consider that a typical scientist is less biased at work (their scientific output tends to be more accurate than, say, their life choices or political opinions) and is used to rigorous standards in such things.
It may be, but would you trust any such test run by another non-mainstream group, if they used one of their own to adjudicate the result?
Not from the outside, no.
As suggested in the OP, they have to create the tests, not only evaluate their results. Even if average LWers want to find out whether LW memes are actually helpful, they are likely to be biased in choosing the criteria of rationality. For example, a test made by a LWer would be more likely to include a Newcombesque question where one-boxing is classified as the rational answer, and since one-boxers are certainly more prevalent among LWers than in nearly any other group, the results would show that LW memes improve rationality. But the OP is not interested in testing whether LW memes improve LW-style extended rationality (it would be quite weird if they didn’t), but rather practical, real-life rationality. We are not impartial judges when it comes to determining the boundary between these two.
Or more generally, you can never be too careful about possible biases. Not seeing a reason for a self-serving bias is pretty weak evidence of its non-existence.
Probably we should have two or three different control groups: one of average humans; one of scientists, day traders, and entrepreneurs; and one of nerds on the internet.
“Where do you get a group of ‘people who haven’t read LW yet, but are the sort of person who might read LW’?”
They’re called newbies: people who just recently started reading LW. Measure the improvement in rationality for the control group and for the experimental, newbie group.
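The comparison proposed here is essentially a difference-in-differences design: test both groups before and after, and ask whether the newbies improved more than the control group did over the same period. A minimal sketch, with entirely made-up scores for illustration (no real test or data is assumed):

```python
# Sketch of the proposed newbie-vs-control comparison.
# All scores below are invented purely for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def improvement(pre, post):
    """Average post-test score minus average pre-test score."""
    return mean(post) - mean(pre)

# Hypothetical pre/post scores on some agreed-upon rationality test.
control_pre, control_post = [52, 48, 55, 50], [53, 49, 54, 52]
newbie_pre, newbie_post = [51, 47, 56, 49], [58, 55, 61, 57]

control_gain = improvement(control_pre, control_post)
newbie_gain = improvement(newbie_pre, newbie_post)

# The quantity of interest: how much more the newbies improved
# than the control group did over the same period.
effect = newbie_gain - control_gain
print(control_gain, newbie_gain, effect)
```

With a real study one would of course also need enough subjects and a significance test; this only shows the shape of the comparison, not a defensible analysis.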
Actually, the hard part may be finding a scientist willing to risk eir career and past work to admit that ey isn’t a rationalist.
Yes, the implicit identification of “LessWrong” and “rationalist” is a local trope only.
This seems off to me. First of all, LW rationality is a specific brand of rationality, one that focuses on proactively dealing with cognitive biases. Second of all, the interest that Eliezer and others have in the Singularity and related issues carries a serious status hit in the general population. Third, one doesn’t need someone who actively identifies as a non-rationalist, just someone with no prior connection to LW.