Inference & Empiricism

Speaking very roughly, our best tools for figuring out the truth are inference and empiricism. By inference I mean using math, logic, and theory in general to conclude new facts from things we assume to be true. By empiricism I mean looking at the world, doing experiments, and so on.

Inference tends to work particularly well when you’re highly confident in your premises. Empiricism tends to work particularly well in domains of high uncertainty.

Nothing prevents you from combining the two. For example, my basic applied thinking framework is to “run towards uncertainty”: have a theory, identify the points of highest uncertainty in that theory, figure out the smallest experiment or action that would resolve that uncertainty, and do it. This is basically the scientific method. In the context of programming, I call it “Risk Driven Development”.
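As a rough sketch of what this can look like in code, imagine listing the assumptions a design rests on, scoring how uncertain each one is, and probing the riskiest one before building anything real. The claims, scores, and probes below are invented purely for illustration:

```typescript
// A hypothetical sketch of Risk Driven Development. The claims,
// uncertainty scores, and probes are made up for illustration.
interface Assumption {
  claim: string;
  uncertainty: number;   // 0 = certain, 1 = pure guess
  probe: () => boolean;  // the smallest experiment that tests the claim
}

const assumptions: Assumption[] = [
  {
    claim: "a million rows fit comfortably in memory",
    uncertainty: 0.8,
    // Crude spike: actually allocate the data instead of speculating.
    probe: () =>
      new Array(1_000_000).fill("row".repeat(25)).length === 1_000_000,
  },
  {
    claim: "sorting 100k strings is fast enough for one batch",
    uncertainty: 0.4,
    probe: () => {
      const start = Date.now();
      Array.from({ length: 100_000 }, () => Math.random().toString()).sort();
      return Date.now() - start < 1_000;
    },
  },
];

// Run towards uncertainty: probe the riskiest assumption first.
const [riskiest] = [...assumptions].sort(
  (a, b) => b.uncertainty - a.uncertainty
);
console.log(`${riskiest.claim}: ${riskiest.probe() ? "holds" : "fails"}`);
```

The data structure is beside the point; the habit is attacking the highest-uncertainty assumption with the smallest experiment that could falsify it.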

People with highly theoretical degrees tend to struggle in high-uncertainty domains after graduating, because their go-to tool is pure inference, and inference without empiricism fails in the real world: your assumptions are never 100% true, not even close. (They generally learn empiricism with practice.)

The failure modes of high empiricism without theory are much more subtle. Pure empiricism pretty much always works decently well. Failures look more like “didn’t invent general relativity” – theory tends to gather a small number of large victories. Less commonly, theory lets you avoid a mistake the first time you do something, or more generally learn from fewer examples.

One major point of contention among programmers is how much value you gain from abstractions that are 100% true versus 95% true. Programmers who are really good at inference gain a huge advantage from 100% true abstractions: a chain of inference is only as strong as its weakest link, so a 95% guarantee collapses after a few steps, while a 100% guarantee compounds indefinitely. Programmers who aren’t gain only a 5% advantage, and thus see the stricter abstraction as a huge cost with little benefit. The vast majority of functional programming advocates you meet will be people whose preferred method is inference.
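To make that concrete, here is a small sketch of my own (not anyone’s canonical example). The first function’s contract is only “95% true”: it promises a number, but NaN leaks through on bad input, so callers have to verify empirically. The second encodes failure in the type, making the contract 100% true, so downstream reasoning can proceed by inference alone:

```typescript
// A "95% true" abstraction: the signature promises a number, and it
// almost always delivers one, but NaN slips through on bad input, so
// no caller can rely on the contract without testing it.
function parseLoose(s: string): number {
  return Number(s); // silently yields NaN for unparseable input
}

// A "100% true" abstraction: failure is part of the type. The
// compiler forces every caller to handle both cases, so reasoning
// about downstream code can proceed by inference alone.
type Result<T> =
  | { ok: true; value: T }
  | { ok: false; error: string };

function parseStrict(s: string): Result<number> {
  const n = Number(s);
  return Number.isNaN(n)
    ? { ok: false, error: `not a number: "${s}"` }
    : { ok: true, value: n };
}

const r = parseStrict("42");
if (r.ok) {
  console.log(r.value + 1); // provably a number on this branch
}
```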

Someone who is strong at inference and weak at empiricism entering a high-uncertainty domain like flirting will often be given advice like “don’t think too much.” This doesn’t usually work: the advice-giver has an “empiricism button”, so to speak, that they can press in that situation. The pure theorist does not, so simply turning off theory doesn’t teach them empiricism. A more effective approach, at least in my experience, is to develop theories about how to interact effectively in those situations, then notice the points of highest uncertainty and test them.

More generally, some high-empiricism, low-inference people form an implicit (or explicit) belief that theory is actively counterproductive. They will see lots of apparently confirming examples of this, because theory is rarely useful quickly.

Theory is more or less low-status in domains like business, which means that even when successful people attribute their success to theory, those memes will not spread. A great author on theory applied to business is Eliyahu Goldratt.

A common example of theory being useful is when you find a technique that works in one case and realize it can be significantly generalized. For example, Agile basically comes from Lean plus the Theory of Constraints, both originally developed for factories. Limiting work in progress is so helpful in so many domains that it’s almost like cheating.
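To show what a work-in-progress limit means in code terms, here is a minimal sketch of my own (not taken from the Lean or Theory of Constraints literature): at most `limit` asynchronous tasks run at once, so a bottleneck shows up as visible queueing instead of a growing pile of half-finished work:

```typescript
// A minimal WIP limiter: at most `limit` tasks are in flight at any
// moment. Each worker pulls the next task only after finishing its
// current one, so work in progress never exceeds the limit.
async function runWithWipLimit<T>(
  tasks: Array<() => Promise<T>>,
  limit: number
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;

  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++; // safe: JS is single-threaded between awaits
      results[i] = await tasks[i]();
    }
  }

  const workers = Array.from(
    { length: Math.min(limit, tasks.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
}

// Usage: ten simulated jobs, never more than three in progress.
const jobs = Array.from({ length: 10 }, (_, i) => () =>
  new Promise<number>((resolve) => setTimeout(() => resolve(i), 100))
);
runWithWipLimit(jobs, 3).then((out) => console.log(out));
```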