Yeah, this stuff might help somewhat, but I think the core problem remains unaddressed: ad-hoc reputation systems don’t scale to thousands of researchers.
It feels like something basic like “have reviewers / area chairs rate other reviewers, and post un-anonymized cumulative reviewer ratings” (a kind of h-index for review quality) might go a long way. The double-blind structure is maintained, while providing more incentive (in terms of status, and maybe direct monetary reward) for writing good reviews.
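As a rough sketch of what I mean (the 0–10 rating scale, the reviewer IDs, and the exact scoring rule here are all made up for illustration, not anything a venue actually runs), the cumulative metric could literally be an h-index over area-chair ratings of a reviewer’s reviews:

```python
from collections import defaultdict

def review_h_index(ratings: list[int]) -> int:
    """h-index analogue for review quality: the largest h such that at
    least h of this reviewer's reviews were rated h or higher
    (hypothetical 0-10 ratings given by area chairs)."""
    h = 0
    for i, r in enumerate(sorted(ratings, reverse=True), start=1):
        if r >= i:
            h = i
        else:
            break
    return h

# Area chairs rate individual reviews; ratings accumulate per reviewer ID.
ratings_by_reviewer: dict[str, list[int]] = defaultdict(list)
ratings_by_reviewer["reviewer_417"] += [8, 6, 5, 3]
ratings_by_reviewer["reviewer_902"] += [9, 9, 2]

# The published leaderboard exposes only the cumulative score,
# never which papers a given reviewer handled.
for reviewer, rs in sorted(ratings_by_reviewer.items()):
    print(reviewer, review_h_index(rs))  # reviewer_417 -> 3, reviewer_902 -> 2
```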
It’s almost always only single-blind: the reviewers usually know who the authors are.
yeah fair—my main point is that you could have a reviewer reputation system without de-anonymizing reviewers on individual papers
(alternatively, de-anonymizing reviews might improve the incentives to write good reviews on the current margin, but it would also introduce other bad incentives, e.g. towards sycophancy, which academics seem deontically opposed to)
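Concretely, the split I have in mind might look like this (a toy sketch; the record layout and names are invented): the paper-to-reviewer link stays private to the program committee, and only the per-reviewer aggregate ever gets published:

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    """Private PC-side record: the only place reviewer and paper meet."""
    paper_id: str
    reviewer_id: str
    review_rating: int  # an area chair's rating of the review itself

def public_reputation(records: list[ReviewRecord]) -> dict[str, float]:
    """Publishable view: reviewer -> mean rating of their reviews.
    paper_id never appears in the output, so no individual review
    is de-anonymized."""
    buckets: dict[str, list[int]] = {}
    for rec in records:
        buckets.setdefault(rec.reviewer_id, []).append(rec.review_rating)
    return {rid: sum(rs) / len(rs) for rid, rs in buckets.items()}

records = [
    ReviewRecord("paper_a", "reviewer_417", 8),
    ReviewRecord("paper_b", "reviewer_417", 6),
    ReviewRecord("paper_a", "reviewer_902", 9),
]
print(public_reputation(records))  # {'reviewer_417': 7.0, 'reviewer_902': 9.0}
```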
Interesting. You’re essentially trying to set up an alternative reputation system, I guess. But I don’t see what the incentive is for academics to buy into this new reputation system when they already have one (the h-index). I also don’t see the incentive for giving honest ratings to other reviewers.
Intuition pump: Most marketplace platforms allow buyers and sellers to rate each other. This has direct usefulness to both because it influences who you buy from / sell to. Therefore there is immediate buy-in.
However, reviewing doesn’t work like this, because authors and reviewers exercise little individual agency (nor should they) in deciding who reviews which papers.
From what I understand, reviewing used to be a non-trivial part of an academic’s reputation, but relied on much smaller academic communities (somewhat akin to Dunbar’s number). So in some sense I’m not proposing a new reputation system, but a mechanism for scaling an existing one (but yeah, trying to get academics to care about a new reputation metric does seem like a pretty big lift)
I don’t really follow the marketplace analogy—in a more ideal setup, reviewers would be selling a service to the conferences/journals in exchange for reputation (and possibly actual money). Reviewers would then be selected based on their previous reviewing track record and domain of expertise. I agree that in the current setup this market structure doesn’t really hold, but that is in some sense the core problem.
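To make that market structure concrete (a toy sketch; the scoring rule and data shapes are invented for illustration), selection could be as simple as ranking candidates by domain overlap and then by track record:

```python
def select_reviewers(paper_topics: set[str],
                     candidates: dict[str, tuple[float, set[str]]],
                     k: int = 3) -> list[str]:
    """Pick the k candidates with the best (domain overlap, reputation).
    `candidates` maps a reviewer ID to (reputation score, expertise topics)."""
    def score(rid: str) -> tuple[int, float]:
        reputation, topics = candidates[rid]
        return (len(paper_topics & topics), reputation)
    return sorted(candidates, key=score, reverse=True)[:k]

candidates = {
    "reviewer_417": (7.0, {"optimization", "rl"}),
    "reviewer_902": (9.0, {"nlp"}),
    "reviewer_133": (5.5, {"rl", "theory"}),
}
print(select_reviewers({"rl", "theory"}, candidates, k=2))
# ['reviewer_133', 'reviewer_417']
```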