I asked a tangentially related question two years ago (to the day, as it happens). I made the comparison with an eye to performance maintenance/enhancement, rather than accountability, but it feels like they are two sides of the same coin.
I disagree with most of the other commenters about the difficulty of generating metrics. The simplest answer is that plenty of metric-style measures beyond citations or IQ are already known and used when comparing two academics, such as:
Age of first degree
Age of first publication
Scores on standardized tests, like SAT/ACT/GRE
Placement in national competitions/exams
Patents with their name
Patents based on their work
These are the kinds of things people reach for when comparing historical academics, for example. Beyond those, you could highlight performance metrics that aren't necessarily about their core area of expertise, but indicate a more general ability:
Publications outside their core specialization
Credentials outside their core specialization
Correctness of grammar/spelling of the languages in which they publish
Number of languages spoken
Number of languages in which they published
Success of their students
Then you can consider subjective and informal things:
Evidence of reputation among their peers
Popularity of talks or lectures not about one of their own publications
For example, consider that von Neumann is widely regarded as a strong candidate for the smartest human ever to live. Whenever this is discussed, the centerpiece of the argument seems to boil down to the other legendary minds of his day saying something to the effect of "this guy makes me feel like an idiot" in private correspondence.
But this information is all buried in biographies and curricula vitae. It isn’t gathered and tracked systematically, largely because it seems like there is no body incentivized to do so. This is what I see as the crux of it; there is no intellectual institution with similar incentives to the sports leagues or ESPN.