I see implementing a solution as a much bigger barrier than finding one. The premier publications have a strong grip on prestige that's very difficult to surmount; without outside pressure, they would be taking a huge risk by making any major change to the status quo. But there are several thousand journals out there, and I don't get why more of them aren't attempting major changes. The vast majority of journals could easily move up or down in prestige by quite a lot. The obscure journal that got rid of p-values got far more publicity by doing so than a journal of its stature normally would. PLOS ONE rocketed upward out of nowhere in a short period, even if it's not considered a premier publication. I read Andrew Gelman semi-regularly, and he occasionally suggests several different solutions.
Some ways to improve the system:
Lower the barrier to entry for critiques of accepted papers. A critique should not be held to the same submission standards as a regular paper, and if a flaw is found, the critique should be published in the same journal that published the original paper.
Prediction markets: This isn't practical for the vast majority of journals, which are too small, but Science or JAMA could do it if they chose to. A market could be set up on whether a study's major finding will be replicated within, say, 10 years. A yes bet wins if the finding is replicated, and loses if replication fails or if no attempt is made. Treating no attempt the same as a failed attempt gives researchers an incentive to encourage others to independently replicate their work, and it gives bettors an incentive to check replications for fraud, since real money would be at stake. This would only work for new findings as stated, but I'm sure it could be modified for other types of studies.
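To make the resolution rule concrete, here is a minimal sketch in Python, assuming a hypothetical binary contract per study; names like Outcome and resolve_yes_payout are invented for illustration, not any real market's API:

```python
from enum import Enum

class Outcome(Enum):
    REPLICATED = "replicated"    # independent replication succeeded
    FAILED = "failed"            # replication attempted and failed
    NO_ATTEMPT = "no_attempt"    # 10-year window closed with no attempt

def resolve_yes_payout(outcome: Outcome, stake: float, odds: float) -> float:
    """Pay out a YES bet on 'this finding replicates within 10 years'.

    Crucially, NO_ATTEMPT resolves the same way as FAILED, which is what
    pushes researchers to actively court independent replication.
    """
    if outcome is Outcome.REPLICATED:
        return stake * odds   # YES wins
    return 0.0                # FAILED or NO_ATTEMPT: YES loses

# Example: a $100 YES bet at 2.5 odds
print(resolve_yes_payout(Outcome.REPLICATED, 100.0, 2.5))  # 250.0
print(resolve_yes_payout(Outcome.NO_ATTEMPT, 100.0, 2.5))  # 0.0
```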
Raw data requirements: A study can't be published without its accompanying raw data, and possibly a complete replication package, though the latter isn't practical for a lot of disciplines. If the researcher isn't a programmer, you can't expect them to export a tidy bundle of R scripts, but the raw data itself is definitely reasonable to ask for. It wouldn't be included in the paper copy, but who cares? On a side note, why can't we have full-color graphs? Seriously, it's the 21st century; nobody reads the paper copy.
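As a sketch of how a journal's submission pipeline might enforce the two-tier requirement (raw data mandatory, full replication package optional), here is one way it could look; the directory layout and function names are assumptions for illustration:

```python
from pathlib import Path

REQUIRED = ["data"]            # raw data: mandatory
OPTIONAL = ["code", "README"]  # full replication package: nice to have

def check_submission(root: str) -> bool:
    """Reject a submission outright if the raw data is missing;
    only warn if the optional replication package is absent."""
    root_path = Path(root)
    for name in REQUIRED:
        if not (root_path / name).exists():
            print(f"REJECT: missing required '{name}/'")
            return False
    for name in OPTIONAL:
        if not (root_path / name).exists():
            print(f"warning: no '{name}' (replication package incomplete)")
    print("accepted: raw data present")
    return True

check_submission("submissions/smith2015")  # hypothetical submission path
```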
Great idea about using prediction markets. I'll think about making a few predictions on PredictionBook for some studies, though that's obviously inferior to your suggestion, since only a real market gives the researchers themselves an incentive to be honest and to encourage replication.
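For anyone doing the PredictionBook version by hand, a quick way to keep score is the standard Brier score (mean squared error of your stated probabilities against the 0/1 outcomes); the predictions below are made up for illustration:

```python
def brier_score(forecasts):
    """Mean squared error between stated probability and outcome (1 or 0).
    0.0 is perfect; 0.25 is what always guessing 50% earns."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# (probability the finding replicates, did it replicate?)
my_predictions = [
    (0.8, 1),  # confident it would replicate; it did
    (0.3, 0),  # doubted it; it didn't
    (0.9, 0),  # overconfident; it failed
]
print(brier_score(my_predictions))  # ~0.31, worse than chance here
```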