How do you identify complex systems?

Over the past decade of studying scientific inference, one major class of problems has frustrated me the most. It’s what I think most people here focus on understanding: how to properly identify a complex system.

We all basically know that complex systems are unpredictable in certain ways: tiny changes in starting parameters can produce wildly different outcomes. Despite this, some complex systems still seem to follow a state or pattern identifiable from a single input. In my field of study (economics), we know that if you increase the money supply, inflation follows. Sort of. Lots of people thought inflation would follow from quantitative easing after the Great Recession, when it didn’t (in fact, if anything, the opposite happened).
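To make “tiny changes in starting parameters” concrete, here’s a minimal Python sketch using the logistic map, a standard toy example of chaotic dynamics (nothing economic about it, just an illustration of the sensitivity I mean):

```python
# Sensitivity to initial conditions in the logistic map x' = r*x*(1-x).
# Two trajectories that start 1e-9 apart diverge completely within a
# few dozen steps.

def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.3, 0.3 + 1e-9
for step in range(60):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: |a - b| = {abs(a - b):.2e}")
```

The gap grows roughly exponentially until the two runs are effectively unrelated, which is why point forecasts in such systems decay so fast.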

In the graph below, you can see that the market-based expected five-year inflation rate shot up right as the economy began to tank, on fears that quantitative easing and low interest rates would raise inflation (as they had in the past). In reality, so the retrospective story goes, the velocity of money dropped so heavily that it countered any inflationary impact. Then, under a year later, the market decided inflation would probably converge back to its mean. This is how market-based predictions went in arguably the best-understood part of macroeconomics.
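For readers outside economics, that retrospective story maps onto the equation of exchange, MV = PY, where M is the money supply, V is the velocity of money, P is the price level, and Y is real output. With made-up illustrative numbers (not the actual 2008 figures): if M rises 30% while V falls 25% and Y holds roughly flat, prices actually face slight downward pressure:

```latex
P = \frac{MV}{Y}
\quad\Longrightarrow\quad
\frac{P_{\text{new}}}{P_{\text{old}}}
  = \frac{(1.30\,M)(0.75\,V)}{MV}
  = 1.30 \times 0.75
  = 0.975
```

The identity itself is uncontroversial; the hard part is that V is not observable in advance and moved in exactly the direction forecasters didn’t expect.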

(I actually worked on a team at the Fed later on to try to help do this even better. Ours was the Financial Econ team, so we used market-based measures instead of trying to build formal structural models. Could we predict inflation-expectation dynamics better than a random walk? Yes, but only by the skin of our teeth. Plus, to paraphrase Tetlock, it’s a field that rewards mastery of impressive tools more than accuracy.)
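For anyone unfamiliar with the “beat a random walk” benchmark: the random-walk forecast just predicts tomorrow’s value equals today’s, and you compare errors against it. A minimal sketch of that comparison (the series and the “model” here are fabricated for illustration, not the Fed team’s actual data or pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up "inflation expectations" series: noisy mean reversion toward 2%.
n, mean, phi = 500, 2.0, 0.98
series = np.empty(n)
series[0] = 2.5
for t in range(1, n):
    series[t] = mean + phi * (series[t - 1] - mean) + rng.normal(0, 0.05)

# Random-walk benchmark: forecast(t) = value(t-1).
rw_forecast = series[:-1]

# A simple mean-reverting forecast, standing in for a real model.
mr_forecast = mean + phi * (series[:-1] - mean)

actual = series[1:]
rmse = lambda f: np.sqrt(np.mean((actual - f) ** 2))
print(f"random walk RMSE: {rmse(rw_forecast):.4f}")
print(f"mean-revert RMSE: {rmse(mr_forecast):.4f}")
```

Even with the true data-generating process handed to the model, the improvement over the random walk is small, which matches the “skin of our teeth” experience.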

As far as macroeconomics goes, there are few things economists understand more reliably than the relationship between money and inflation. Within simple models (and probably in reality), it’s clear that if you increase the money supply toward infinity, the value of money is going to drop to zero. I can’t prove it from historical experiment, but it’s reasonable to use fragments of past information and models to predict that if the government dropped trillions of dollars from helicopters over major U.S. cities, inflation would shoot up.

Despite this, there seem to be complex nonlinearities that mess with our predictions outside of the extreme cases. I like this example because it seems to be a clear case of a complex system’s state being determined monotonically by a single parameter at the extremes, yet remaining unpredictable across most of the range we actually care about.
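That shape (determined at the extremes, messy in the middle) shows up in toy systems too. Continuing the logistic-map illustration from above, and again as a purely hypothetical analogy rather than an economic model, sweeping the single parameter r gives a predictable fixed point at low values and chaotic wandering at high ones:

```python
import numpy as np

def long_run_states(r, n_transient=500, n_keep=50, x0=0.5):
    """Iterate x' = r*x*(1-x), discard transients, return states visited."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1 - x)
    states = []
    for _ in range(n_keep):
        x = r * x * (1 - x)
        states.append(x)
    return np.array(states)

for r in [1.5, 2.5, 3.7, 3.9]:
    s = long_run_states(r)
    print(f"r = {r}: long-run spread = {s.max() - s.min():.4f}")

# At r = 1.5 and 2.5 the system collapses to a single predictable fixed
# point (spread ~ 0); at r = 3.7 and 3.9 it wanders chaotically, even
# though the map itself is a simple one-parameter formula.
```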

In economics, my intuition for when to doubt a prediction or model has been refined by years of studying and working in forecasting. It’s nothing really taught in a textbook. The only way I can rationally make sense of it is that there are certain dynamics inherent to economic forecasting, across time, space, and problem type. Whether it’s forecasting monetary policy or revenue from TV sales in Spain, the way humans interact with economic systems follows some strangely similar pattern. Over years of working in this field I’ve formed an idea of what this pattern is, but I can’t prove it or explain it.

It’s similar to what Tetlock writes about in Expert Political Judgment and Superforecasting. Even in his analysis, though (which is incredible), he is only able to identify a few general types of people. What I take from it is that there is something pretty heavy going on in the brain of a ‘hedgehog’ vs. a ‘fox’ that lets some people filter a signal out of reality more reliably. Still, based on his writing, it seems more that hedgehogs fall into a cognitive bias of overconfidence, while foxes are more humble about the complexity of reality and more willing to stick to uninformative priors. My guess/prediction is that what makes really great foxes is the ability to switch between knowing when you’re dealing with a complex system and knowing when the answer really is obvious.

In Expert Political Judgment, Tetlock writes, “One could be an excellent forecaster who relies on highly intuitive but logically indefensible guesswork.” What is in that guesswork, though? There has got to be something going on there, which suggests that there are models of the world and systems of thinking that haven’t been formalized, but which could let humans or computers properly classify when a system is too complex and when a complex system has a predictable state.

Back to the original topic: this drives me insane. We can each develop personal heuristics for evaluating predictions and complex systems, but sharing them with others is really tough.

I’m interested in how others identify complex systems, especially in your own fields. How do you try to communicate those heuristics to outsiders?