This post is so good! I was wondering whether this framework could be useful for the prediction business, where the Foundational Understanding is crowd-sourced through e.g. academic literature, open data, and manual curation. Ontologies might be created and curated by public consortia, and evaluation could be a public-private endeavour.
"That's useful when you have many professionals who need a common language but which disagree about the causes of mental illnesses."
Under the proposed framework, this means the field lacks Foundational Understanding. Thus I wouldn't feel comfortable calling the DSM an ontology, though there is, e.g., the Mental Disease Ontology, which sometimes maps to the DSM.
The title doesn't seem to fit the question well: p-hacking detection does not map neatly onto replicability, even though the presence of p-hacking usually means a study will not replicate.
I'm interested in automatic summarization of papers' key characteristics (PICO, sample size, methods), and I'm starting to build something soon.
Subscribe to / get notifications for new comments on a post. I already have enough tabs open in the browser to keep track of all the interesting posts :)
I’m interested
"Remdesevir (lopinavir + ritonavir) (HIV)"
A small mistake with the parentheses: remdesivir and lopinavir + ritonavir are different things.
My prior for fomite transmission of respiratory viruses is very low: a 1988 article on rhinovirus, a hamster SARS-CoV-2 study, and human case series. I don't have time to do a serious review, though.
The authors say "non-negligible", though. And it's a simulation study. Besides, in the limitations section they acknowledge the absence of literature on many biological parameters.
Kudos for trying to address the issue; better late than never. If you believe there are possible risks if the methods were used, the first thing to do would be a retraction, since an erratum/corrigendum presupposes that the conclusions still hold. Acting quickly could also prevent legal trouble.
We don't really have a metric for meaning or impact, though.
And even if we had decent metrics, they would only gain validity with time, since the impact of a discovery becomes evident only after a while (think of patents, landmark papers, new disciplines).
It appears to me that the incentive system is the real issue here. UBI or some kind of basic job guarantee might release a lot of people from their publishing cages, allowing them to work on research fundamentals: gathering good data, working on theory and methodology, replicating studies.