Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields

(This post is an expanded version of a LW comment I left a while ago. I have found myself referring to it so much in the meantime that I think it’s worth reworking into a proper post. Some related posts are “The Correct Contrarian Cluster” and “What is Bunk?”)

When looking for information about some area outside of one’s expertise, it is usually a good idea to first ask what academic scholarship has to say on the subject. In many areas, there is no need to look elsewhere for answers: respectable academic authors are the richest and most reliable source of information, and people claiming things completely outside the academic mainstream are almost certain to be crackpots.

The trouble is, this is not always the case. Even those whose view of modern academia is much rosier than mine should agree that it would be astonishing if there didn’t exist at least some areas where the academic mainstream is detached from reality on important issues, while much more accurate views are scorned as kooky (or would be if they were heard at all). Therefore, depending on the area, the fact that a view is way out of the academic mainstream may imply that it’s bunk with near-certainty, but it may also tell us nothing if the mainstream standards in the area are especially bad.

I will discuss some heuristics that, in my experience, provide a realistic first estimate of how sound the academic mainstream in a given field is likely to be, and how justified one would be to dismiss contrarians out of hand. These conclusions have come from my own observations of research literature in various fields and some personal experience with the way modern academia operates, and I would be interested in reading others’ opinions.

Low-hanging fruit heuristic

As the first heuristic, we should ask if there is a lot of low-hanging fruit available in the given area, in the sense of research goals that are both interesting and doable. If yes, this means that there are clear paths to quality work open for reasonably smart people with an adequate level of knowledge and resources, which makes it unnecessary to invent clever-looking nonsense instead. In this situation, smart and capable people can just state a sound and honest plan of work on their grant applications and proceed with it.

In contrast, if a research area has reached a dead end where further progress is impossible except perhaps by some extraordinary path-breaking genius, or if it never had a viable and sound approach to begin with, it’s unrealistic to expect that members of the academic establishment will openly admit this and decide it’s time for a career change. What will likely happen instead is that they’ll continue producing output that has all the superficial trappings of science and sound scholarship, but is in fact increasingly pointless and detached from reality.

Arguably, some areas of theoretical physics have reached this state, if we are to trust critics like Lee Smolin. I am not a physicist, and I cannot judge directly whether Smolin and other similar critics are right, but some powerful evidence for this came several years ago in the form of the Bogdanoff affair, which demonstrated that highly credentialed physicists in some areas can find it difficult, perhaps even impossible, to distinguish sound work from a well-contrived nonsensical imitation. [1]

Somewhat surprisingly, another example is presented by some subfields of computer science. With all the new computer gadgets everywhere, one would think that no other field could be further from a stale dead end. In some of its subfields this is definitely true, but in others, much of what is studied rests on decades-old major breakthroughs, and the known viable directions from there have long since been explored until they ran up against some fundamentally intractable problem. (Or alternatively, further progress is a matter of hands-on engineering practice that doesn’t lend itself to the way academia operates.) This has led to a situation where a lot of published CS research is increasingly distant from reality, because to keep up the illusion of progress, it must pretend to solve problems that are basically known to be impossible. [2]

Ideological/venal interest heuristic

Bad as they might be, the problems that occur when clear research directions are lacking pale in comparison with what happens when the matters under discussion are ideologically charged, or are ones in which powerful interest groups have a stake. As Hobbes remarked, people agree about theorems of geometry not because their proofs are solid, but because “men care not in that subject what be truth, as a thing that crosses no man’s ambition, profit, or lust.” [3]

One example is the cluster of research areas encompassing intelligence research, sociobiology, and behavioral genetics, which touches on a lot of highly ideologically charged questions. These pass the low-hanging fruit heuristic easily: the existing literature is full of proposals for interesting studies waiting to be done. Yet, because of their striking ideological implications, these areas are full of work clearly aimed at advancing the authors’ non-scientific agenda, and even after a lot of reading one is left in confusion over whom to believe, if anyone. It doesn’t even matter whose side one supports in these controversies: whichever side is right (if any one is), it’s simply impossible that there isn’t a whole lot of nonsense published in prestigious academic venues and under august academic titles.

Yet another academic area that suffers from the same problems is the history of the modern era. On many significant events from the last two centuries, there is a great deal of documentary evidence lying around still waiting to be assessed properly, so there is certainly no lack of low-hanging fruit for a smart and diligent historian. Yet due to the clear ideological implications of many historical topics, ideological nonsense cleverly masquerading as scholarship abounds. I don’t think anything resembling an accurate world history of the last two centuries could be written without making a great many contrarian claims. [4] In contrast, on topics that don’t arouse ideological passions, modern histories are often amazingly well researched and free of speculation and distortion. (In particular, if you are from a small nation that has never really been a player in world history, your local historians are likely to be full of parochial bias motivated by local political quarrels and grievances, but you may be able to find very accurate information on your local history in the works of foreign historians from elite academia.)

On the whole, it seems to me that failing the ideological interest test suggests a much worse situation than failing the low-hanging fruit test. Areas affected by only the latter are still fundamentally sound: they tend to produce work whose contribution is way overblown, but which is built on a solid basis and internally coherent. Even when outright nonsense is produced, it’s still clearly distinguishable with some effort and usually restricted to less prestigious authors. Areas affected by ideological biases, however, tend to drift much further into outright delusion, possibly lacking a sound core body of scholarship altogether.

[Paragraphs below added in response to comments:]

What about the problem of purely venal influences, i.e. cases where researchers are under the patronage of parties that have a stake in the results of their research? On the whole, the modern Western academic system is very good at discovering and stamping out clear and obvious corruption and fraud. It’s clearly not possible for researchers to openly sell their services to the highest bidder; even if there were no formal sanctions, their reputations would be ruined. However, venal influences are nevertheless far from nonexistent, and a fascinating question is under what exact conditions researchers are likely to fall under them and get away with it.

Sometimes venal influences are masked by scams such as setting up phony front organizations for funding, but even that tends to be discovered eventually and to tarnish the reputations of the researchers involved. The real problem seems to arise when the beneficiaries of biased research enjoy such status in the eyes of the public, and such a legal and customary position in society, that they don’t even need to hide anything when establishing a perverse symbiosis that results in biased research. Such relationships, while fundamentally representing venal interest, are in fact often boasted about as beneficial and productive cooperation. Pharmaceutical research is an often-cited example, but I think the phenomenon is far more widespread, and it reaches the height of perverse perfection in those research communities whose structure effectively blends into various government agencies.

The really bad cases: failing both tests

So far, I’ve discussed examples where one of the mentioned heuristics returns a negative answer, but not the other. What happens when a field fails both of them, having no clear research directions and at the same time being highly relevant to ideologues and interest groups? Unsurprisingly, it tends to be really bad.
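
To make the combined logic of the two heuristics explicit, here is a minimal toy sketch in Python of how the four possible cases play out. This is purely an illustrative restatement of the argument so far, not a method anyone has proposed: the function name, the boolean inputs, and the verdict strings are all invented for this sketch, and real judgments are of course matters of degree rather than crisp booleans.

    # Toy 2x2 restatement of the two heuristics; purely illustrative.
    # All names and verdict strings are invented for this sketch.
    def expected_soundness(has_low_hanging_fruit: bool,
                           ideologically_charged: bool) -> str:
        """Rough first guess at a field's soundness, per the two heuristics."""
        if has_low_hanging_fruit and not ideologically_charged:
            # Clear paths to honest work, no perverse incentives.
            return "likely sound; contrarians can be dismissed out of hand"
        if not has_low_hanging_fruit and not ideologically_charged:
            # Dead-ended but honest: overblown yet internally coherent work.
            return "inflated but coherent; nonsense detectable with effort"
        if has_low_hanging_fruit and ideologically_charged:
            # Sound methods exist, but agendas contaminate the literature.
            return "mixed; agenda-driven work alongside genuine results"
        # Fails both tests: the case discussed in this section.
        return "likely cargo-cult; take contrarian critiques seriously"

    # Example: a field that fails both tests.
    print(expected_soundness(has_low_hanging_fruit=False,
                             ideologically_charged=True))

The last branch is the case the rest of this section discusses.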

The clearest example of such a field is probably economics, particularly macroeconomics. (Microeconomics covers an extremely broad range of issues deeply intertwined with many other fields, and its soundness, in my opinion, varies greatly depending on the subject, so I’ll avoid a lengthy digression into it.) Macroeconomists lack any clearly sound and fruitful approach to the problems they wish to study, and any conclusion they might draw will have immediately obvious ideological implications, often expressible in stark “who-whom?” terms.

And indeed, even a casual inspection of the standards in this field shows clear symptoms of cargo-cult science: weaving complex and abstruse theories that can be made to predict everything and nothing, manipulating essentially meaningless numbers as if they were objectively measurable properties of the real world [5], experts with the most prestigious credentials dismissing each other as crackpots (in more or less diplomatic terms) when their favored ideologies clash, etc., etc. Fringe contrarians in this area (most notably extreme Austrians) typically have silly enough ideas of their own, but their criticism of the academic mainstream is nevertheless often spot-on, in my opinion.

Other examples

So, what are some other interesting case studies for these heuristics?

An example of great interest is climate science. Clearly, the ideological interest heuristic raises a big red flag here, and indeed, there is little doubt that a lot of the research coming out in recent years that supposedly links “climate change” with all kinds of bad things is just fashionable nonsense [6]. (Another sanity check it fails: only a tiny proportion of these authors ever hypothesize that the predicted/observed climate change might actually improve something, as if there existed some law of physics prohibiting it.) Thus, I’d say that contrarians on this issue should definitely not be dismissed out of hand; the really hard question is how much sound insight (if any) remains after one eliminates all the nonsense that has infiltrated the mainstream.

When it comes to the low-hanging fruit heuristic, I find the situation less clear. How difficult is it to achieve progress in accurately reconstructing long-term climate trends and forecasting the influence of increasing greenhouse gases? Is it hard enough that, even absent ideological motivation, we’d expect people to substitute cleverly contrived bunk for unreachable sound insight? My conclusion is that I’ll have to read much more on the technical background of these subjects before I can form any reliable opinion on these questions.

Another example of practical interest is nutrition. Here ideological influences aren’t very strong (though not altogether absent either). However, the low-hanging fruit heuristic raises a huge red flag: it’s almost impossible to study these things in a sound way, controlling for all the incredibly complex and counterintuitive confounding variables. At the same time, it’s easy to produce endless amounts of plausible-looking junk studies. Thus, I’d expect the mainstream research in this area to be on average pure nonsense, with a few possible gems of solid insight hopelessly buried under it, and even when it comes to very extreme contrarians, I wouldn’t be tremendously surprised to see any one of them proven right in the end. My conclusion is similar when it comes to exercise and numerous other lifestyle issues.

Exceptions

Finally, what are the evident exceptions to these trends?

I can think of some exceptions to the low-hanging fruit heuristic. One is historical linguistics, whose standard well-substantiated methods have had great success in identifying the structure of the world’s language family trees, but give no answer at all to the fascinating question of how far back into the past the nodes of these trees reach (except of course when we have written evidence). Nobody has any good idea how to make progress there, and the questions are tantalizing. Now, there are all sorts of plausible-looking but fundamentally unsound methods that purport to answer these questions, and papers using them occasionally get published in prestigious non-linguistic journals, but actual historical linguists firmly dismiss them, even though they have no answers of their own to offer instead. [7] It’s an example of a commendable stand against seductive nonsense.

It’s much harder to think of examples where the ideological interest heuristic fails. What field can one point out where mainstream scholarship is reliably sound and objective despite its topic being ideologically charged? Honestly, I can’t think of one.

What about the other direction—fields that pass both heuristics but are nevertheless nonsense? I can think of e.g. artsy areas that don’t make much of a pretense to objectivity in the first place, but otherwise, it seems to me that absent ideological and venal perverse incentives, and given clear paths to progress that don’t require extraordinary genius, the modern academic system is great at producing solid and reliable insight. The trouble is that these conditions often don’t hold in practice.

I’d be curious to see additional examples that either confirm or disprove the heuristics I’ve proposed.

Footnotes

[1] Commenter gwern has argued that the Bogdanoff affair is not a good example, claiming that the brothers were decisively exposed as frauds once they came under intense public scrutiny. However, even if this is true, the fact remains that they initially managed to publish their work in reputable peer-reviewed venues and obtain doctorates at a reputable (though not top-ranking) university, which strongly suggests that there is much more work in the field that is equally bad but doesn’t elicit equal public interest and thus never gets seriously scrutinized. Moreover, from my own reading about the affair, it was clear that in its initial phases several credentialed physicists were unable to make a clear judgment about their work. On the whole, I don’t think the affair can be dismissed as an insignificant accident.

[2] Moldbug’s “What’s wrong with CS research” is a witty and essentially accurate overview of this situation. He mostly limits himself to the discussion of programming language research, but a similar scenario can be seen in some other related fields too.

[3] Thomas Hobbes, Leviathan, Chapter XI.

[4] I have the impression that LW readers would mostly not be interested in a detailed discussion of the topics where I think one should read contrarian history, so I’m skipping it. In case I’m wrong, please feel free to open the issue in the comments.

[5] Oskar Morgenstern’s On the Accuracy of Economic Observations is a tour de force on the subject, demonstrating the essential meaninglessness of many sorts of numbers that economists use routinely. (Many thanks to the commenter realitygrill for directing me to this amazing book.) Morgenstern is of course far too prestigious a name to dismiss as a crackpot, so economists appear to have chosen to simply ignore the questions he raised, and his book has been languishing in obscurity and out of print for decades. It is available for download though (warning: ~31MB PDF).

[6] Some amusing lists of examples have been posted by the Heritage Foundation and the Number Watch (linking these is not meant as an endorsement of the rest of the material on those websites). Admittedly, a lot of what is listed there is not real published research, but rather people’s media statements. Still, there is no shortage of similar things in published research, as a search of e.g. Google Scholar will show.

[7] Here is, for example, the linguist Bill Poser dismissing one such paper published in Nature a few years ago.