I was using “athletes” as a thought experiment. I do think it’s worth considering: having a bunch of clear, objective metrics could be interesting and useful, especially if introduced gradually and with the right summary stats. However, the first steps toward metrics for intellectuals would be subjective reviews, evaluations, and the like.
Things will also get more interesting as better AI and similar tools become able to provide stats that aren’t exactly “boring objective stats” but aren’t quite “well thought out reviews” either.
I think you might enjoy getting into efforts like Replication Watch that work to uncover scientific fraud and push for better standards in scientific publishing. There is an effort in the scientific world to bring statistical and other tools to bear on policing papers, and entire fields, for p-hacking, publication bias and the file drawer problem, and outright fraud. This seems to me the mainline effort to do what you’re talking about.
Here on LW, Elizabeth has been doing posts on what she calls “Epistemic Spot Checks,” trying to figure out how a non-expert can quickly vet the quality of a book they’re reading without having to become an expert in the field itself. I’d recommend reading her posts in general; she’s onto something.
While I don’t think these sorts of efforts will ever produce the kind of crisp, objective, powerfully useful statistics that characterize sabermetrics, I suspect that just about every area of life could benefit from a little more statistical rigor. And certainly, holding intellectuals to a higher public standard is a worthy goal.
That’s a good point, I think it’s fair here.