Positive vs. Normative Rationality

[Epistemic Status: Highly speculative/​experimental. Playing around with ideas on rationality. Extremely interested in input, criticism, or other perspectives]

I.)

Throughout the 20th century, the University of Chicago school of thought seems to have generated the most positive rational analysis, which branched out across the social sciences among economist types. These guys took a positive view of rationality as a given. Their research interest was in studying how rational agents would form equilibria, or produce suboptimal outcomes in politics or markets, rather than in commenting on what an individual ought to do.

While their analysis was based on positive rationality, these guys clearly had preferences for how the world should run, and those preferences made their way into their analysis. These preferences rest gently upon the view that humans are capable of identifying, solving, and building themselves a more perfect world. The philosopher of science Karl Popper thought it was a disturbing fact that even the most abstract fields seemed to be ‘motivated and unconsciously inspired by political hopes and by Utopian dreams.’

I remember, during my grad degree, talking to my British political game theory professor in his office hours. I wanted to know how he built his models. They were theoretical and abstract, but creating them required inspiration from reading books, or the news, or staring out your window at the sky. Wouldn’t that make them empirical, then? I know people get annoyed at logical positivists saying “Sure, sure, those chemical bond equations work today, but can you prove they will work tomorrow?” Unfortunately, I don’t think political game theory has the same predictive grasp on reality that would let it dismiss those concerns as boring.

It was a few years later, when I was studying neural nets, that I began thinking back to game theory. If all humans were glorified computers, then game theory modelling meant using our brains to capture information from reality and map out the structure of a game and the preferences of an agent. Our brains were the estimator in this model, which meant that any game theory model meant to capture reality was itself estimated; it was just estimated using our neural network. It seems trivial when I write it out, but the current methods for evaluating these kinds of models still seem to rest on a non-predictive intuition.

When I revisited the Chicago economists I could see their estimation; it was implied in everything they wrote. Their models all embedded the view that communism, fascism, and paternalism aren’t just morally bad, but are by definition utility-destroying and irrational. Books like “Capitalism and Freedom” and “The Road to Serfdom,” by Friedman and Hayek, took that positive rational framework and applied it to human interaction. (Books and economists I still deeply respect.)

Gary Becker even presented a famous argument on racial discrimination, which showed how competitive markets would tend towards a non-discriminatory equilibrium. And look, this might be the true model of reality in all its parsimony, but as a counterfactual, could you imagine this positive view of rationality receiving any acceptance if Becker had argued that a hierarchy of races were the true equilibrium? At the very least there is a predictive complexity here that is missing (not to dismiss Becker’s research; it’s great).
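For readers who haven’t seen the argument, here is a minimal sketch of its logic, in my own notation rather than Becker’s (d is his “discrimination coefficient”; w and p are placeholder wage and productivity symbols):

```latex
\[
\begin{aligned}
\text{perceived cost of a disfavored worker (discriminating employer):} &\quad w(1+d), \; d > 0 \\
\text{actual cost (non-discriminating employer):} &\quad w \\
\text{per-worker profit at identical productivity } p: &\quad p - w \;>\; p - w(1+d)
\end{aligned}
\]
\]
```

Non-discriminating employers hire the disfavored group at w, earn higher profits, and expand; competition then bids wages back toward equality, which is the non-discriminatory equilibrium the argument points to.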

And while purely theoretical utility models and pure mathematical game theory sort of avoid this, the moment you bring them back down to earth to inform reality, you’re estimating the model. Only this model has no data to fit: no information from reality mapped to numbers.

For all the difficulties, these guys are at least still taking measurement issues seriously. While a truly positive view of rationality might not exist in reality, aspiring towards one imposes a structure on their work. This structure requires that they decompose their observations thoughtfully, map out the most important components (preferences, utilities, institutions, games), and apply mathematical equilibrium refinements.

II.)

One question that keeps me up at night, though, is under what conditions we should call rationality positive instead of normative. I’ve never been able to get a handle on a clear distinction. I think it’s an intractable problem, and not one that can be solved with more philosophical classifications and words like ‘deontology.’ Despite not being a perfect model, the positive/normative labels are still useful distinctions. EY’s sequences outline a robust view of positive rationality. Thomas Carlyle’s reactionary essays outline a clear normative view of the world. But what about the worldviews of guys like Paul Krugman or Peter Thiel? Are they based on positive analysis? Or do we write them off as normative? (And if we did, what would that mean?)

Here is my guess at an explanation for a small part of this discrepancy:

The normative tends to fixate less on the imposed scientific structure the positivists aspire towards, or focuses a little less on mapping situations onto clear, well-defined structures. That’s all. There is no discrete shift, no jump to a new dimension of analysis, no regime shift. It’s the same thing done in a way we consider less rigorous. I wish this implied it was all strictly worse, less predictive, and easy to identify. That would make life way easier, but it isn’t necessarily true.

We all have a brain-state, based on our brain’s structure, how it has been programmed, and the information we have observed. We then use that brain-state to run a simulation of society. As of now we don’t have science sophisticated enough to code up these simulations. We will someday.

As an example, we can simulate a prisoner’s dilemma pretty easily in our own brains. Up until a certain point you can solve most game theory problems by guessing what you would do in the game. For me it was the strangest feeling to know the answer to a game but be unable to prove formally why it was the right answer. Eventually this strategy stops working for complex equilibrium refinements.
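To make the “simulation” concrete, here is a minimal sketch (with made-up payoff numbers) of solving the prisoner’s dilemma by brute force instead of intuition:

```python
from itertools import product

# Classic prisoner's dilemma payoffs (higher is better for the player).
# Entries: payoffs[(my_move, their_move)] = (my_payoff, their_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
moves = ["cooperate", "defect"]

def best_response(their_move):
    """My payoff-maximizing move, holding the opponent's move fixed."""
    return max(moves, key=lambda m: payoffs[(m, their_move)][0])

# A pure-strategy Nash equilibrium: each move is a best response to the other.
equilibria = [
    (a, b) for a, b in product(moves, moves)
    if a == best_response(b) and b == best_response(a)
]
print(equilibria)  # [('defect', 'defect')]
```

The intuitive guess and the brute-force search agree here; the point is that for small games your brain seems to run something like this loop implicitly, and only for complex refinements does the implicit version break down.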

We also know that unsupervised neural networks can pick out nonlinear dynamics in almost any set of information, and that words are, so far, best modeled by neural nets. And as far as positive economics goes, while I haven’t tried exhaustively, most games and strategies that can be explained mathematically can also be explained using words. We also seem to have evolved to be much better at absorbing vast amounts of information through words than through mathematical models, even if words make us subject to far more information-transfer problems and biases.

What this means is that it is plausible, if not trivially true, that the right words strung together could portray a more predictive and accurate model than the positivist emphasis on well-defined structure ever could. Even the smartest economists and social scientists seem to take to their blogs now and again to explain, in words, how the world actually works and how it should work.

Everything Marx wrote is, in a sense, an incredibly complex simulation, one no human could follow if it were mapped into a mathematical structure. It seems hard to believe that a human could simply observe the world and then, in their own brain, simulate an entirely different alternative.

(It would be a weird simulation, since if he left out a few crucial words the whole thing could crash, or turn nonsensical. Or maybe the words he chose imply multiple different simulations, some of which make sense and some of which don’t. I find that thinking about it this way goes a long way toward explaining how a few books can be debated for centuries. It’s not because they are brilliant, but because they aren’t well defined and are highly sensitive to reader assumptions.)

The fact that our brains can attempt to estimate problems that would require insane sci-fi counterfactual worlds to properly test is pretty cool. My own personal theory, which I also can’t prove, is that humans find this deeply unsatisfying. We hate the uncertainty, and would rather belong to a tribe advocating for a certain utopia.

III.)

Eventually the dark side seems to pull in some rationalists as they search for the Holy Grail. I bet we have all felt that pull in some form. We all have to choose how strongly to hold our views of the optimal world. What do we think humans are capable of achieving? The more sophisticated a view of the optimal society you form, the farther you walk from anything like a clearly predictive, positivist view.

What we call positive rationality anchors itself more on structure. For trivial structures, like testing cognitive biases in a lab with a counterfactual framework, it’s robust. In the social sciences it starts to blur the farther you walk from counterfactual science. The more you rely on words to map your argument, the more dimensions and variance you add to your positive argument. And the more dimensions in your simulation of the world, the harder it becomes to meaningfully test, even as it lets you consider far more information.

Once this simulated argument is sufficiently high-dimensional, complex, and based on fragments of information absorbed throughout a human’s life, we start to call it ‘normative,’ because we can no longer find a way to map the information onto a structure that lets us easily communicate and explain our estimation to one another.