I think we’re overdue for a general overhaul of “applied epistemic rationality”.
Superforecasting and adjacent skills were, in retrospect, the wrong places to put the bulk of the focus. General epistemic hygiene is a necessary foundational element, but predictive power is only one piece of what makes a model useful. It’s a necessary condition, not a sufficient one.
Personally, I expect/hope that the next generation of applied rationality will be more explicitly centered around gears-level models. The goal of epistemic rationality 2.0 will be, not just a predictively-accurate model, but an accurate gears-level understanding.
I’ve been trying to push in this direction for a few months now. Gears vs Behavior talked about why we want gears-level models rather than generic predictively-powerful models. Gears-Level Models are Capital Investments talked more about the tradeoffs involved. And a bunch of posts showed how to build gears-level models in various contexts.
Some differences I expect compared to prediction-focused epistemic rationality:
Much more focus on the object level. A lot of predictive power comes from general outside-view knowledge about biases and uncertainty; gears-level model-building benefits much more from knowing a whole lot about the gears of a very wide variety of systems in the world.
Much more focus on causality, rather than just correlations and extrapolations.
Less outsourcing of knowledge/thinking to experts, and much more effort spent extracting experts’ models, figuring out where those models came from, and gauging how reliable the model-sources are.