Assuming Nails

Tangential followup to Defeating Ugh Fields in Practice.
Somewhat related to Privileging the Hypothesis.

Edited to add:
I’m surprised by negative/neutral reviews. This means that either I’m simply wrong about what counts as interesting, or I haven’t expressed my point very well. Based on commenter response, I think the problem is the latter. In the next week or so, expect a much more concise version of this post that expresses my point about epistemology without the detour through a criticism of economics.

At the beginning of my last post, I was rather uncharitable to neoclassical economics:

If I had to choose a single piece of evidence off of which to argue that the rationality assumption of neoclassical economics is totally, irretrievably incorrect, it’s this article about financial incentives and medication compliance.… [to maintain that this theory is correct] is to crush reality into a theory that cannot hold it.

Some mistook this to mean that I believe neoclassical economists honestly, explicitly believe that all people are always totally rational. But, to quote Rick Moranis, “It’s not what you think. It’s far, far worse.” The problem is that they often take the complex framework of neoclassical economics and treat a valid deduction within that framework as a valid deduction about the real world. Deductions within any given framework are entirely uninformative unless the framework corresponds to reality, yet because such deductions are internally valid, we often give them far more weight than they are due. Testing the fit of a theoretical framework to reality is hard, while a valid deduction within a framework feels deeply satisfying. But even if you have a fantastically engineered hammer, you cannot go around assuming everything you want to use it on is a nail. It is all too common for experts to assume that their framework applies cleanly to the real world simply because it works so well in its own world.

If this concept doesn’t make perfect sense, that’s what the rest of this post is about: spelling out exactly how we go wrong when we misuse the essentially circular models of many sciences, and why this matters. We will begin with the one discipline in which this problem does not occur: mathematics, the paragon of “pure” academic disciplines. This is principally because mathematics appears to have perfect conformity with reality, with no research or experimentation needed to ensure said conformity. The entire system of mathematics exists, in a sense, in its own world. You could sit in a windowless room (perhaps one with a supercomputer) and, theoretically, derive every major theorem of mathematics, given the proper axioms. The answers to the most difficult unsolved problems in mathematics were determined the moment the terms and operators within them were defined: once you say a “circle” is “the set of points in a plane equidistant from a center,” you have already determined every single digit of pi. The problem is finding out exactly how this model works, making calculations and deductions within it. In the case of mathematics, for whatever reason, the model conforms perfectly to the real world, so any valid mathematical deduction is a valid deduction in the real world.
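To make the windowless-room picture concrete, here is a minimal sketch in Python (my illustration, not anything from the original post): given nothing but the definition of a circle, a computer can grind out the digits of pi without ever consulting the outside world.

```python
import random

def estimate_pi(samples: int = 1_000_000) -> float:
    """Estimate pi purely from the definition of a circle:
    the set of points at distance <= r from a center."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:  # point falls inside the quarter disc
            inside += 1
    # The quarter unit disc has area pi / 4, so scale up.
    return 4.0 * inside / samples

print(estimate_pi())  # ~3.14..., every digit fixed by the definition alone
```

Everything the program needs lives inside the model; no observation of the world enters anywhere.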

This is not the case in any true science, which by necessity must rely on experiment and observation. Every science operates on some simplified model of the world, at least with our current state of knowledge. This creates two avenues of progress: discoveries within the model, which allow one to make predictions about the world, and refinements of the model, which make such predictions more accurate. If we have an internally consistent framework, theoretical manipulation within our model will never show us our error, because our model is circular and functions outside the real world. It would be like trying to predict a stock market crash by analyzing the rules of Monopoly, except that it doesn’t feel absurd. There is nothing wrong with the model qua model; the problem is with the model qua reality, and we have to look at both to figure that out.

Economics is one of the fields that suffers most from this problem. Our mathematician in his windowless room could generate models of international exchange rates without ever having seen currency, once we gave him the appropriate definitions and assumptions. However, when we try using these models to forecast the future, life gets complicated, and no amount of experimenting within our original model will fix this without looking at the real world. At best, we come up with some equations that appear to conform to what we observe, but we run the risk that the correspondence is incidental, or that there were some (temporarily) constant variables we left out that will suddenly cease to be constant and break the whole model. It is all too easy to forget that the tremendous rigor and certainty we feel when we solve the equations of our model does not translate into the real world. Getting the “right” answer within the model is not the same thing as getting the real answer.
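Here is a toy sketch of that failure mode (all names and numbers are invented for illustration): a model that omits a variable which merely happened to be constant during the observation period fits the history perfectly, then breaks the moment the hidden variable moves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical world: price depends on demand AND a policy variable z,
# but z sat at 1.0 during the whole period we happened to observe.
def true_price(demand, z):
    return 2.0 * demand + 5.0 * z

demand_hist = rng.uniform(0, 10, 100)
z_hist = np.full(100, 1.0)            # z held constant while we watched
price_hist = true_price(demand_hist, z_hist)

# Our model omits z entirely -- and still fits the history perfectly.
slope, intercept = np.polyfit(demand_hist, price_hist, 1)
residual = np.max(np.abs(slope * demand_hist + intercept - price_hist))
print(f"worst in-sample error: {residual:.2e}")  # essentially zero

# The moment z stops being constant, the "right" answers go wrong.
demand_new, z_new = 5.0, 3.0
print("model predicts:", slope * demand_new + intercept)  # 15.0
print("reality says:  ", true_price(demand_new, z_new))   # 25.0
```

Nothing inside the fitted model signals the danger; only comparing its output against the world does.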

As an obvious practical example, an individual with a serious excess of free time could develop a model of economics which assumes that agents are rational paper-clip maximizers: that agents are rational and their ultimate concern is maximizing the number of existing paper-clips. Given even more free time and a certain amount of genius, you could even model the behaviour of irrational paper-clip maximizers, so long as you had a definition of irrationality. But however refined these models are, they will remain entirely useless unless you actually have some paper-clip maximizers whose behaviour you want to predict. And even then, you would need to check your predictions against what those maximizers actually do. Developing a great hammer is relatively useless if the thing you need to make must be put together with screws.
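For what it’s worth, such a model is trivial to write down (this sketch is mine, with made-up options and payoffs); its internal validity costs nothing and guarantees nothing:

```python
# A toy "rational paper-clip maximizer": the agent picks whichever
# action yields the most paper-clips. Internally flawless -- and
# useless unless real agents actually share this utility function.
from typing import Dict

def predict_choice(actions: Dict[str, int]) -> str:
    """Predict the action a rational paper-clip maximizer would take."""
    return max(actions, key=actions.get)

# Hypothetical options and their paper-clip payoffs.
options = {"buy wire": 120, "buy staples": 0, "do nothing": 10}
print(predict_choice(options))  # -> 'buy wire', valid within the model
# Whether any real agent behaves this way is an empirical question
# the model itself can never answer.
```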

There is an obvious difference between the sciences in the magnitude of this problem, and it seems to be based on the difficulty of experimenting within them. In harder sciences where experiments are fairly straightforward, like physics and chemistry, it is not terribly difficult to make models that conform well with reality. The bleeding edge of, say, physics tends to lie in areas that are either extremely hard to observe, like the subatomic, or extremely computation-intensive. In softer sciences, experiments are very difficult, and our models rely much more on powerful assumptions, social values, and armchair reasoning.

As humans, we are both bound and compelled to use the tools we have at our disposal. The problem here is one of uncertainty. We know that most of our assumptions in economics are empirically off, but we don’t know how wrong they are, or how much that matters when we make predictions. The framework nevertheless seeps into the very core of our picture of reality itself. We cannot feel the disconnect when we try to make predictions; a well-designed model feels so complete that there is no sensation of error when we apply it. We are, after all, applying it correctly; it just doesn’t apply to reality. This leads people to have high degrees of certainty and yet frequently be wrong. It would not surprise me if the failure of many experts to appreciate the model-reality gap is responsible for a large proportion of incorrect predictions.

This, unfortunately, is not the end of the problem. It gets much worse when you add a normative element into your model, when you get to call some things “efficient” or “healthful” or “normal” or “insane.” There is also a serious question as to whether this false certainty is preferable to the vague unfalsifiability of even softer social sciences. But I shall save these subjects for future posts.