It’s definitely a good idea to be skeptical. There is certainly some badly-designed research out there, and some that shows less than it claims to. The best way to deal with that is to read the original papers and make sure the studies were adequately performed, although this doesn’t entirely solve the issue (see: publication bias).
It would be really nice if studies had a sort of thoroughness checklist at the top of the paper, next to the abstract, clearly stating sample size, sampling process, number of peer reviewers, study methodology (double-blind, panel, etc.), and any other information relevant to the paper’s validity. If even crude standardization could occur within specific fields, it would make cross-study comparison much easier. Or papers could be published online in a format that invites public criticism, where authors would be obliged to answer community concerns.
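A minimal sketch of what such a standardized, machine-readable checklist might look like if it were attached alongside an abstract. Every field name and value here is invented purely for illustration; no journal uses this exact format:

```python
# Hypothetical "thoroughness checklist" that could sit next to a paper's
# abstract in a machine-readable form. All fields and values are invented.
import json

checklist = {
    "sample_size": 120,
    "sampling_process": "convenience sample of undergraduates",
    "peer_reviewers": 3,
    "methodology": "double-blind randomized controlled trial",
    "preregistered": False,
}

# If fields were standardized within a discipline, readers could compare
# studies field by field instead of digging through the methods section.
print(json.dumps(checklist, indent=2))
```

The point of a common schema is exactly the cross-study comparison mentioned above: two papers sharing these fields could be lined up automatically.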
This already happens in some cases. PLoS ONE, for example, publishes entirely online with open access and invites community criticism:
http://www.plosone.org/static/information.action
(Sorry, I’ve yet to figure out how to link things and suchlike; can HTML be used here?)
One issue with just allowing anyone to comment on a paper, though, is the high proportion of misinformed or ignorant people who can hijack the discussion. LW gets round this very well with its judicious gardening, and other sites do too, so perhaps it’s not as big an issue as I’m making it out to be. Unmoderated comment forums do tend to turn into slimepits, though.
The best way to deal with that is to read the original papers and make sure the studies were adequately performed

And even just reading the abstracts is already a huge step forward for epistemic hygiene, because science reporting and journalism can be so damn shoddy (besides, I regularly find that the abstracts are easier to read and understand than their popularizations).
I generally agree. I have an aversion to just reading abstracts because it doesn’t let you get at the nitty-gritty of how exactly the studies were performed, but it’s way better than just reading the news reports—and not everyone has full-text access to studies anyway.
The Economist recently had an article about how sitting in wobbly furniture makes people crave “emotional stability.” They also mention a study finding that people sitting in chairs that lean to the left reported more liberal opinions.

The difference is not huge, but it is statistically significant. Even a small amount of environmental wobbliness seems to promote a desire for an emotional rock to cling to.
http://www.economist.com/node/21558553
As far as I can tell they are completely serious.
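A side note on the “not huge, but statistically significant” pattern: with a large enough sample, even a tiny difference clears the significance bar. A toy two-sample z-test, using only the standard library and entirely invented numbers, illustrates this:

```python
# Toy illustration (all numbers invented): the same small mean difference is
# non-significant with a small sample but highly significant with a large one.
import math

def two_sample_z(mean1, mean2, sd, n):
    """Two-sided p-value for two equal-sized groups with a known common SD."""
    se = sd * math.sqrt(2.0 / n)  # standard error of the difference in means
    z = (mean1 - mean2) / se
    # Normal CDF via the error function: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# A 0.2-point difference on some rating scale (SD = 1.0):
print(two_sample_z(5.2, 5.0, 1.0, 50))    # small sample: not significant
print(two_sample_z(5.2, 5.0, 1.0, 1000))  # large sample: well below 0.05
```

This is why “statistically significant” alone says little about whether an effect matters; the effect size has to be judged separately.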