Creating incentives to catch misconduct seems the simplest solution. Some percentage of each grant should be set aside, not for those conducting the study, but for those who follow up on it, with an award reserved for the first invalidation of its results. Set up a percentage penalty system in which grant seekers at universities with high levels of recent invalidations (expiring after some period of time) receive fewer grants going forward, to create incentives at the systems level.
I may be missing something, but… If there is a prize for disproving a study that is a percentage of the cost of the original study, then isn’t that just making it lower payoff to cheat on the original study, and higher payoff to cheat on the disproving study?
That is, if I don’t like the result that “Product X is ineffective”, and I am willing to fund a study to disprove that claim, isn’t it likely that I can more easily find a willing-to-fudge research team (because they stand to collect the original grant’s invalidation bonus)?
My understanding of this would be that the original grant would be split something like 90%/10% (grant/invalidation bonus), and the second grant 90%/10% (grant/invalidation bonus) plus 10% of the previous grant (if the original study is invalidated).
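To make the arithmetic concrete, here is a minimal sketch of the payoffs under that reading. The grant amounts and the 90/10 split are made-up numbers for illustration, not anything proposed in the thread:

```python
# Hypothetical payoffs under the proposed 90/10 grant split.
# All dollar amounts are invented for illustration.
ORIGINAL_GRANT = 100_000
SECOND_GRANT = 100_000
BONUS_SHARE = 0.10  # fraction of each grant held back as the invalidation bonus


def payouts(original_invalidated: bool) -> dict:
    """Return what each team receives under the sketched scheme."""
    original_team = ORIGINAL_GRANT * (1 - BONUS_SHARE)  # 90% paid up front
    second_team = SECOND_GRANT * (1 - BONUS_SHARE)      # 90% paid up front
    if original_invalidated:
        # The disproving team also claims the original grant's bonus pool.
        second_team += ORIGINAL_GRANT * BONUS_SHARE
    return {"original_team": original_team, "second_team": second_team}


print(payouts(original_invalidated=False))  # both teams keep 90,000
print(payouts(original_invalidated=True))   # second team collects 100,000
```

Note that under this reading the second team’s own 10% stays held back, since its study can in turn be invalidated, which is the point the next reply makes.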
Your disproving study can itself be disproved, thus claiming a portion of the funding allocated to you, and reducing your systems-level reputation and hence grant approvals.
The effectiveness of this solution depends, in particular, on what other incentives are in play.
Imagine a poor crime-ridden neighbourhood where police put up “Rat on your neighbours—we pay for tips!” posters. That’s “incentives to catch misconduct”, but even if you collect the tip you still have to live in the neighbourhood, and I expect that being a known snitch carries a heavy price.
Do you think objectivity and willingness to challenge ideas in science is regarded by those within the fields in such a manner?
I mean, it wouldn’t surprise me, but if it’s gotten that far, I think the problem may have gotten beyond a simple remedy.
By some, certainly. I expect the prevalence to vary depending on the field. In, say, physics, not so much, but in things like gender studies, close to 100%.
Duh… X-/
I don’t see how this point carries over to the problem at hand… what’s the heavy price for the scientist snitch?
For example, you won’t be invited as a co-author on papers. People will exclude you from research groups. Reviewers will be nasty to your submissions.
The bigger problem with that is that the police will be flooded with false tips.