You confuse two very different issues.

1) How much weight you should give to the views of academics in a given area: if some claim is accepted by the mainstream establishment (or, conversely, viewed as a valid point of disagreement), how much should that information affect your own probability judgement?
2) How much progress the academic discipline in question has made and how useful it is. Does it require reform?
Your arguments in the first part are relevant only to #2. The programming language research community may be mired in hopeless mathematical jealousy, creating ever more arcane type systems while ignoring the fact that programming language design is ultimately a psychological question. The languages are all Turing complete, and most offer the same functionality in some form; the only real question is one of human usability, and the community doesn't seem very interested in checking which type systems or development environments are empirically more productive. Likewise, maybe physics is stuck and can no longer make any real progress.
Nevertheless, this has no bearing on how I should treat the evidence that 99% of physics professors predict experiment X will have outcome Y. Indeed, the argument that physics is stuck is largely that physicists have been so successful in explaining all easily testable phenomena that further progress is difficult. Similarly, if the programming language researchers say that type system Blah is undecidable, I will take that evidence seriously even if the result doesn't turn out to be very useful.
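To make "how much should that evidence move me" concrete, here is a minimal Bayesian sketch. All the numbers are illustrative assumptions, not data from the discussion:

```python
# Minimal Bayesian update sketch: how much should "99% of physics
# professors predict outcome Y" move my credence in Y?
# All numbers below are illustrative assumptions, not data.

prior_y = 0.5                   # my prior that experiment X yields outcome Y
p_consensus_given_y = 0.9       # chance experts converge on Y if Y is true
p_consensus_given_not_y = 0.05  # chance they converge on Y if Y is false

# Bayes' rule: P(Y | consensus) = P(consensus | Y) * P(Y) / P(consensus)
evidence = (p_consensus_given_y * prior_y
            + p_consensus_given_not_y * (1 - prior_y))
posterior_y = p_consensus_given_y * prior_y / evidence

print(round(posterior_y, 3))  # 0.45 / 0.475 ≈ 0.947
```

Even with a coin-flip prior, near-unanimous expert prediction pushes the posterior above 0.9 here; how far it pushes depends entirely on how reliable you think consensus formation in that field is, which is exactly what #2 is about.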
(Frankly, I think the harshness toward CS is a bit unfair. Academia by its nature is conservative and driven by pure research. Since CS is such a young discipline, we don't yet know whether this work will turn out to be useful down the road, and in the meantime many people work in both practical and theoretical areas.)
I think #1 is the more interesting question. Here I would say the primary test should be whether disputes eventually produce consensus: does the discipline build up a store of accepted facts and move on to new issues (with occasional Kuhnian paradigm shifts), or does it simply stay mired in the same issues without generating conclusions?
Pardon, I didn't notice your comment earlier; unfortunately, you don't get notifications when someone replies to a top-level article the way you do for replies to comments.
The difference you have in mind is basically the same one I meant when I wrote about areas that are infested with a lot of bullshit work but still fundamentally sound. Clearly CS people are smart and possess enormous practically useful knowledge and skills; after all, it's easy for anyone doing CS research at an institution of any prominence to get a lucrative industry job working on very concrete, no-nonsense, profitable projects. The foundations of the field are therefore clearly sound and useful.
This still doesn't mean, however, that there aren't entire bullshit subfields of CS in which a vast research literature is produced on things that are a clear dead end (or aimed at entirely dreamed-up problems) while everyone pretends, and loudly agrees, that great contributions are being made. In such cases, the views expressed by the experts are seriously distant from reality, and it would be horribly mistaken to make important decisions by taking them at face value. People who work on such things are of course still capable of earning money doing useful work in industry, but only because the sort of bullshit they have to produce must be sophisticated and in conformity with complex formal rules; producing the right sort of bullshit still requires great intellectual ability and many useful skills.
You may be right that I should have made a stronger contrast between such fields and those that are rotten to the core.