some method of incentivizing novelty / importance

Citation count clearly isn’t a good measure of accuracy, but it’s likely a good measure of importance in a field. So we could run some kind of expected value calculation where the usefulness of a paper is measured by

`P(result is true) * (# of citations) - P(result is false) * (# of citations) = (# of citations) * [P(result is true) - P(result is false)]`

Edit: where the probabilities are approximated by replication markets. I think this function gives us what we actually want, so optimizing institutions to maximize it seems like a good idea.
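As a minimal sketch of the expected-value calculation above (the function name and the example numbers are mine, and the replication-market probability is assumed to be given as an input):

```python
def expected_usefulness(p_true: float, citations: int) -> float:
    """Expected usefulness of a paper: citation count weighted by the
    replication-market probability that the result is true, i.e.
    citations * [P(true) - P(false)]."""
    p_false = 1.0 - p_true
    return citations * (p_true - p_false)

# A heavily cited result with only a 40% chance of replicating scores
# negative, while a modestly cited but solid result scores positive.
print(expected_usefulness(0.4, 500))   # roughly -100
print(expected_usefulness(0.9, 100))   # roughly 80
```

Note that any paper with a replication probability below 50% scores negative no matter how often it is cited, which is exactly the behavior the formula is after.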

Edit: This doesn’t actually represent what we want, since journals could just push everyone to cite the same well-replicated study to inflate its citation count. It’s a good measurement of what we want, but not a good goal: we shouldn’t optimize institutions to maximize it.