I don’t see why the lack of personal responsibility (which I think is a slight exaggeration) is unique to bureaucratic science. There’s no intrinsic reason why journals where, say, the editor glances over a paper themselves and informally shows it to a few friends before deciding whether or not to publish (instead of soliciting formal peer reviews from experts) would be better in this respect.
The contrast I’m pointing out is between a system where each decision and each claim puts the responsible person’s reputation on the line, and a system where decisions are made according to established bureaucratic rules that allow everyone involved to escape any personal responsibility no matter what happens (except if crude malfeasance like data forgery or plagiarism is proven).
Thus, for example, if a junk paper gets published in a journal, this should tarnish the reputation of both the authors and the editor. Yet, in the present bureaucratic system, the editor can comfortably hide behind the fact that the regular bureaucratic procedure was followed, and even the authors can claim that you can’t really blame them if their false claims sounded convincing enough for the reviewers (who are in turn anonymous and thus completely absolved of any responsibility). If the existing heavily bureaucratized modes of publishing make it difficult to publish criticism (as is often the case), this situation, coupled with the usual human tendencies, may easily lead to utter corruption covered by an impeccable bureaucratic facade that makes it impossible to put blame on anyone.
It’s certainly possible [that double-blind review works] if the authors aren’t already established workers in the expert’s field. If I submitted an econometrics/ecology/history/statistics paper to an econometrics/ecology/history/statistics journal with real double-blind review, I’d bet a lot of money that the reviewer(s) couldn’t guess who I was! But yes, double-blind (and single-blind) review’s often trivial to get around.
The key problem, however, is that blind review is ultimately another way of eliminating personal responsibility. For the reviewer, there is no incentive whatsoever to do a good job: the work is unpaid, uncredited, and without any repercussions no matter how badly it’s done. On the other hand, considering how tightly-knit specific research communities typically are, the supposed blindness is a farce more often than not.
What I think’s happening here is that you see poor science that’s backed by parts of the establishment, and you’re inferring that because the establishment is bureaucratic, bureaucracy’s to blame for the poor science. But I doubt the chosen social structure is the root cause. I’d expect similar sections of rot in an Einstein-era honour-based system.
Often it’s not about poor science being backed by the establishment for ideological reasons (though this also happens), but merely about the fact that a field can be run by a clique that produces junk science under a veneer of bureaucratic perfection, conscientiously going through all the bureaucratic motions despite the actual substance being worthless (or worse).
But, yes, all sorts of pseudoscience also flourished under the Einstein-era system of honor and reputation. Psychoanalysis is the prime example. The question is whether the subsequent bureaucratization has alleviated or exacerbated these problems. My opinion is that, at best, it hasn’t put any real barriers against pseudoscience, and arguably, it has made things worse by allowing pseudoscience to be given a veneer of respectability (and sources of funding) much more easily.