I would reduce the point score of “A simple math proof proves X” from 20 down to 16. As far as I know, there is no literature on how error-prone mathematical proofs are, so I'll go on personal history and anecdotes from others: I have many times found “bugs” in my own proofs (5 points), and someone I know working in formal verification would often update me on the errors they found in published proofs (8 points). I’d give a higher weight (18 points) to formally verified proofs. I'm not deducting more points because of the curious observation that even when a proof is faulty, the result it claims is usually still true (just as neural networks want to learn, mathematical proofs want to be correct).
Additionally, proofs purportedly about real-world phenomena are often importance-hacked, so one needs to read the exact statement of the result carefully, which is often not done.
I find it amusing that GPT-4 considers meta-analyses to be worsening the results they attempt to pool together.
It’s possibly an error, but interestingly, it is also in line with more recent meta-analytic thinking: meta-analyses can be worse than individual RCTs because they simply wind up pooling the systematic errors from all the studies, yielding inflated effect sizes and overly-narrow CIs compared to the best RCTs (which may have larger sampling error than the meta-analysis, but more than make up for it by having less systematic error).
An example of this would be the Many Labs projects, where the well-powered, pre-registered Many Labs replications turned in systematically much smaller effect sizes than not just the original papers but also the meta-analyses of their subsequent literatures. The meta-analyses yielded smaller and better estimates than the p-hacked original papers, true, but still fell far from the truth.
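The pooling mechanism is easy to see in a toy simulation (all effect sizes, biases, and sample sizes below are hypothetical, chosen only to illustrate the point): many small studies sharing a common systematic bias produce a pooled estimate whose CI shrinks confidently around the wrong value, while a single larger unbiased RCT has a wider CI that actually covers the truth.

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.10   # hypothetical true standardized effect
SHARED_BIAS = 0.30   # systematic error shared by all small studies (e.g. p-hacking)
N_STUDIES = 40       # number of small studies entering the meta-analysis
N_PER_STUDY = 50     # participants per small study
N_BIG_RCT = 1000     # one large, well-run, unbiased RCT

# Each small study's estimate = truth + shared systematic bias + sampling noise.
study_se = 1 / np.sqrt(N_PER_STUDY)
study_estimates = TRUE_EFFECT + SHARED_BIAS + rng.normal(0, study_se, N_STUDIES)

# Fixed-effect meta-analysis with equal-precision studies: a plain mean,
# with a standard error that shrinks as more (equally biased) studies pile up.
meta_estimate = study_estimates.mean()
meta_se = study_se / np.sqrt(N_STUDIES)

# The big RCT: more sampling error than the pooled estimate, no systematic bias.
rct_se = 1 / np.sqrt(N_BIG_RCT)
rct_estimate = TRUE_EFFECT + rng.normal(0, rct_se)

print(f"truth:         {TRUE_EFFECT:.3f}")
print(f"meta-analysis: {meta_estimate:.3f} +/- {1.96 * meta_se:.3f}")
print(f"big RCT:       {rct_estimate:.3f} +/- {1.96 * rct_se:.3f}")
```

The meta-analysis ends up with the narrower CI, but that CI sits around truth-plus-bias and excludes the true effect entirely; the RCT's wider CI is centered on the truth. Averaging cancels sampling error, not shared systematic error.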
I like this!