In the biological sciences, one often finds claims of interestingness or usefulness in the abstracts, introductions, and conclusion sections of research papers. Research may claim to overthrow existing paradigms, for example, or lead to disease cures. Presumably one also finds these claims or promises in research proposals. But I’m not sure how one evaluates research papers and programs for how much interestingness and usefulness they actually deliver.
Some kind of citation metric, presumably. But how do we distinguish between being cited for being interesting versus being cited for being useful?
A citation metric seems like a bad way of evaluating usefulness, but perhaps a decent measure of another type of interestingness (are papers cited often in the following year likely to contain useful insights?).
To determine usefulness we need to look at something other than publications. We can hope to estimate how the state of modern theory affects modern practice—what ideas or modes of thinking are important, what techniques are used in practice, etc. Looking back, we then have some leverage to understand what research programs helped advance our understanding in a relevant way, or were indirectly necessary for the development of practically important techniques.
We probably want something automatable, though. Maybe look at the flow of key words and phrases (ones that grouped papers tend to share with each other and not with other papers) from the literature of pure science to engineering and industry?
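As a very rough sketch of what that automated approach might look like: find terms that a cluster of science papers share with each other but not with a background corpus, then check whether those terms later surface in engineering or industry literature. Everything below is hypothetical toy data and a crude term-frequency ratio, not a validated method; a real version would need proper corpora, smoothing, and time-lagged comparison.

```python
from collections import Counter

# Hypothetical toy corpora: text from "pure science" papers and,
# some years later, from engineering/industry literature.
science_docs = [
    "crispr cas9 gene editing in bacterial immunity",
    "crispr cas9 targeted cleavage mechanism",
    "general review of bacterial immunity pathways",
]
engineering_docs = [
    "industrial crispr cas9 pipeline for crop engineering",
    "standard fermentation process optimization",
]

def distinctive_terms(group, background, min_ratio=1.0):
    """Terms that a group of papers share with each other more often
    than with a background corpus (crude term-frequency ratio, with a
    +1 in the denominator to avoid division by zero)."""
    group_counts = Counter(w for d in group for w in d.split())
    bg_counts = Counter(w for d in background for w in d.split())
    group_total = sum(group_counts.values())
    bg_total = sum(bg_counts.values()) or 1
    return {
        w for w, c in group_counts.items()
        if (c / group_total) / ((bg_counts[w] + 1) / bg_total) >= min_ratio
    }

# Terms characteristic of a cluster of science papers...
cluster = science_docs[:2]
background = science_docs[2:]
terms = distinctive_terms(cluster, background)

# ...that later show up in the engineering literature count as "flow".
flowed = {t for t in terms if any(t in d.split() for d in engineering_docs)}
print(sorted(flowed))
```

On this toy data the cluster-distinctive terms are "crispr" and "cas9", and both appear in the engineering corpus, so they register as having flowed from science to practice. The interesting empirical question is whether this kind of signal, computed at scale, correlates with anything we would independently judge as useful research.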