The first three metrics seem like they could encourage sexy but bogus findings even more strongly, by giving the general public more of a role.
Reference manager data could have the same effect, despite reference managers being disproportionately used by researchers rather than laypeople.
Using myself as an example, I sometimes save interesting articles about psychology, medicine, epidemiology and the like when I stumble on them, even though I’m not officially in any of those fields. If a lot of researchers are like me in this respect (admittedly a big if), then sexy, bogus papers in popular generalist journals stand a good chance of bubbling to the top of the Mendeley/Zotero/etc. rankings.
Come to think of it, a handful of the papers I’ve put in my Mendeley database are there because I think they’re crap, and I want to keep a record of them! This raises the comical possibility of papers scoring highly on altmetrics because scientists are doubtful of them!
(jmmcd points out that running PageRank over users might help, although even that would rely on the heavily weighted researchers being less prone to the behaviours I’m talking about.)
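For concreteness, here’s a rough sketch of what PageRank-weighting readers might look like, using the networkx library. The follow graph, the papers, and the save lists are all invented for illustration, and note that it still can’t tell save-because-interesting from save-because-crap:

```python
# Hypothetical sketch: weight reference-manager saves by PageRank
# computed over a "reader A follows reader B" graph.
import networkx as nx

# Invented toy data: who follows whom.
follows = [("alice", "bob"), ("carol", "bob"), ("bob", "dana"),
           ("dana", "alice"), ("carol", "dana")]
graph = nx.DiGraph(follows)

# Standard PageRank with the usual 0.85 damping factor.
reader_rank = nx.pagerank(graph, alpha=0.85)

# Invented toy data: which readers saved which papers.
saves = {"sexy-bogus-paper": ["alice", "carol"],
         "solid-boring-paper": ["bob", "dana"]}

# Score each paper by the summed rank of its savers, so a save from a
# well-followed reader counts for more than one from a drive-by saver.
for paper, savers in saves.items():
    score = sum(reader_rank[reader] for reader in savers)
    print(f"{paper}: {score:.3f}")
```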
This is a problem for efforts to encourage replication and critique of dubious studies: besides the resources wasted replicating false positives, you have to cite the paper you’re critiquing, which boosts its standing in mechanical academic merit assessments like those used in much UK science funding.
We would need a scientific equivalent of the “nofollow” attribute in HTML. A special kind of citation meaning: “this is wrong”.
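I don’t know of any citation standard that actually carries such a flag, but a mock-up is easy enough to sketch. The intent labels and DOIs below are invented for illustration:

```python
# Illustrative sketch: citations tagged with intent, so a metric can
# skip "nofollow"-style (disputing) citations when counting.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Citation:
    citing: str  # ID of the citing paper (DOIs here are made up)
    cited: str   # ID of the cited paper
    intent: str  # "endorse", "neutral", or "dispute" (the nofollow case)

citations = [
    Citation("10.1234/replication", "10.1234/original", "dispute"),
    Citation("10.1234/followup", "10.1234/original", "endorse"),
    Citation("10.1234/review", "10.1234/original", "neutral"),
]

# A naive count treats every citation as an endorsement...
naive = Counter(c.cited for c in citations)

# ...whereas an intent-aware count drops the disputing ones.
endorsing = Counter(c.cited for c in citations if c.intent != "dispute")

print(naive["10.1234/original"])      # 3
print(endorsing["10.1234/original"])  # 2
```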
Fifteen years ago, the academic search engine CiteSeer was designed not just to find academic papers, identify duplicates, and count citations, but, as its name suggests, to show the user the context of each citation, so you could see whether it was positive or negative.
I’ve occasionally wished for this myself. I look forward to semantic analysis being good enough to apply to academic papers, so computers can estimate the proportion of derogatory references to a paper instead of mechanically counting all references as positive.
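In the meantime, even a crude keyword heuristic shows the shape of the computation. The cue phrases and citing sentences below are made up for the sketch; a real system would want a properly trained classifier:

```python
# Crude illustrative heuristic: estimate what fraction of a paper's
# citation contexts read as negative. The cue phrases are invented.
NEGATIVE_CUES = {"fails to replicate", "could not reproduce",
                 "contrary to", "we dispute", "flawed"}

def looks_derogatory(context: str) -> bool:
    """True if the citing sentence contains any negative cue phrase."""
    lowered = context.lower()
    return any(cue in lowered for cue in NEGATIVE_CUES)

def derogatory_fraction(contexts: list[str]) -> float:
    """Fraction of citation contexts flagged as negative."""
    if not contexts:
        return 0.0
    return sum(looks_derogatory(c) for c in contexts) / len(contexts)

# Hypothetical citing sentences pulled from full texts:
contexts = [
    "Our results are contrary to those of Smith et al. (2010).",
    "We build on the method of Smith et al. (2010).",
    "The effect reported by Smith et al. (2010) fails to replicate here.",
]
print(derogatory_fraction(contexts))  # ~0.67
```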