• It commits a category error by calling a frequentist p-value calculation “Bayesian inversion,” which muddles two distinct inferential frameworks.
• It overstates trust in peer-review (“we can only trust articles in peer-reviewed journals”) and cites a paper to claim conspiracies are “mathematically impossible,” ignoring evidence from the reproducibility crisis showing peer review is insufficient on its own.
• It conflates basic logic with ZFC set theory and invents a “Mathematicity Hypothesis” as if this were scientific consensus; logic does not depend on ZFC, and this framing is idiosyncratic at best.
• The Socrates syllogism “anti-example” misfires: deductive validity isn’t established by experiment; experiments test the empirical premises, not the inference rule.
• The null-hypothesis section claims exact hypotheses have probability zero, which is only a Bayesian statement under continuous priors and is not a general truth of hypothesis testing; the ASA’s guidance directly cautions against the very confusions the piece leans on.
• Occam’s razor is reduced to Kolmogorov complexity while ignoring widely used, practical formalizations of parsimony vs. fit (MDL/AIC/BIC), leaving the advice unhelpful for real model selection.
• It claims to compile points “recognized by the global consensus” while stitching together mutually contested philosophies (Popper, Lakatos, Sagan, frequentist testing, Bayesian talk) without acknowledging deep disagreements.
• Declaring the “unit of scientific activity is an experiment” sidelines theory, measurement, simulation, and observational sciences; the later nod to “passive experiments” doesn’t resolve the overreach.
This comment seems to be generated by a large language model with the prompt ‘Find anything that could be pointed out as wrong or incomplete in this article’.
1. No, the mention of Bayesian inversion is not about p-values in any way.
2. This sounds like raising the standards for claiming anything, but no alternative is offered. Fine, let’s agree that peer review is not enough; then what is?
3. Yes, the consistency of ZFC IS the scientific consensus at the moment. And no, the article doesn’t claim that logic depends on ZFC; quite the opposite: ZFC depends on logic, or logic can be thought of as the bottom level of ZFC.
4. The experiment in the anti-example is not supposed to test the inference in any way; it is invoked only to test the conclusion.
5. This sounds like an addition to the provided information, not like something that contradicts it.
6. MDL is nearly equivalent to Kolmogorov complexity, and AIC and BIC are purely numeric mechanics that are actually weaker than Kolmogorov complexity. Nowhere does this article claim to help you select a real mathematical model for anything.
7. Interesting point, but I cannot see how it invalidates anything said in the article. Disagreements between philosophers seem more like a personal issue.
8. This would have been a valid criticism if the article contained no paragraphs other than number 4, but the rest of the text covers everything else quite exhaustively.
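For what it’s worth, the “purely numeric mechanics” in point 6 can be shown in a few lines. The fit numbers below are made up purely for illustration; the point is that AIC and BIC are plain arithmetic over a fitted model’s summary statistics, with no description-length semantics involved:

```python
import math

# AIC and BIC are plain formulas over (max log-likelihood, number of
# parameters, sample size); nothing in them measures program length.

def aic(log_l, k):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * log_l

def bic(log_l, k, n):
    """Bayesian information criterion: k ln n - 2 ln L."""
    return k * math.log(n) - 2 * log_l

# Made-up numbers for two hypothetical fits to n = 100 data points:
n = 100
model_a = {"name": "simple",   "log_l": -210.0, "k": 2}
model_b = {"name": "flexible", "log_l": -205.0, "k": 5}

for m in (model_a, model_b):
    print(m["name"],
          "AIC =", round(aic(m["log_l"], m["k"]), 1),
          "BIC =", round(bic(m["log_l"], m["k"], n), 1))

# Here AIC prefers the flexible model (420.0 < 424.0) while BIC's
# heavier penalty prefers the simple one (429.2 < 433.0): a pure
# numeric trade-off, not a complexity measure in Kolmogorov's sense.
```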
The cult is back…
Someone makes a 1-day account, posts this, and then disappears for two years, ignoring all criticism. And then what? The cult leaders “listen” to your feelings just to push you into what they want again?
And now you’re back to using your 1-day accounts, trying to game the karma buttons?
I said I don’t want to have a conversation with a cult in a local group, and the same applies here. I’ve done my best to point out the most serious issues I can. I’m sure that right now we don’t have any formal definition of the Scientific Method, and the only way to be sure it works is by looking at the results, not at the process itself, at least for now. But even if someone wants to formalize it in some way, it would still be a good idea to adjust and respond to comments, not to try at any cost to prove you were right from the start. Just my thoughts.
That’s all from me: for you, for the cult, and for the cult-leader readers who read everything and never act themselves. Hi, Lesia ;)