I think you’re forgetting the problem of incentives. Whatever standard procedures for evaluating/predicting usefulness you come up with, if they’re actually used to allocate resources and status in practice, people will have an incentive to hack them by designing and presenting their own work so that it comes off as better than it really is. And since people who do research are usually very smart, you’ll be faced with a host of extremely smart people trying to outsmart and cheat your metrics, many of whom will surely succeed. Goodhart’s law, and all that.
This, of course, is not even considering whether the influential people whom you’d have to win over to establish such practices have any incentive to submit their past and present work to such evaluation. Unfortunately, although the problems you point out are very real, there is no straightforward solution to them; almost any attempt at fixing institutions is likely to run into difficult and unpredictable problems with perverse incentives.
My goal is not to convince the research community to switch focus or to prompt sweeping institutional changes.
I know a small number of extremely intelligent and otherwise altruistic people who do pure research (and if my life had gone slightly differently it seems like I might have become one). My goal is to convince such people to think seriously about what they are doing with their time.
This could be alleviated by making the standards sufficiently retrospective, e.g. evaluating the usefulness of current work 100 years from now (which would probably make the evaluation more accurate anyway).
We could also test such predictions on historical data, although that might be slightly trickier.