Forecasting or Predicting is the act of making statements about what will happen in the future (and in some cases, the past) and then scoring the predictions. Posts marked with this tag are for discussion of the practice, skill, and methodology of forecasting. Posts exclusively containing object-level lists of forecasts and predictions are in Forecasts. Related: Betting.
Above all, don’t ask what to believe—ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. – Making Beliefs Pay Rent
Forecasting allows individuals and institutions to test their internal models of reality. A forecaster with a good track record in an area can be more confident in future predictions, and hence actions, in that area. Organisations whose decision-makers have good track records can likewise be more confident in their choices.
Forecasting is hard, but many top forecasters use common techniques. This suggests that forecasting is a skill that can be learnt and practised.
Suppose we are trying to find the probability that an event will occur within the next 5 years. One good place to start is by asking “of all similar time periods, what fraction of the time does this event occur?”. This is the base rate.
If we want to know the probability that Joe Biden is President of the United States on Nov. 1st, 2024, we could ask
What fraction of presidential terms are fully completed (last all 4 years)? The answer to this is 49 out of the 58 total terms, or around 84%.
On the other hand, we know that Biden has already made it through 288 days of his term. If we remove the 5 presidents who left office before that, there are 49 out of 53 or around 92%.
But alternately, Joe Biden is pretty old (78 to be exact). If we look up the death rate for his age in actuarial tables, it’s around 5.1% per year, which over the roughly three remaining years of his term leaves him with a ~15% chance of death, or an ~85% chance of surviving his term.
These are all examples of using base rates. [These examples are taken from Base Rates and Reference Classes by jsteinhardt.]
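The three base-rate calculations above can be sketched in a few lines. The term counts come from the worked example; the actuarial figures are the assumptions stated there.

```python
# Base rates for "Biden completes his term", from three reference classes.
# Counts and rates are taken from the worked example above.

completed_terms = 49        # presidential terms that lasted all 4 years
total_terms = 58
base_rate_all = completed_terms / total_terms
print(f"All terms completed: {base_rate_all:.0%}")          # ~84%

# Conditioning on having already survived 288 days drops 5 early exits.
terms_past_day_288 = 53
base_rate_conditional = completed_terms / terms_past_day_288
print(f"Terms past day 288: {base_rate_conditional:.0%}")   # ~92%

# Actuarial reference class: ~5.1% annual death rate, ~3 years remaining.
annual_death_rate = 0.051
years_remaining = 3
p_survive = (1 - annual_death_rate) ** years_remaining
print(f"Survives remaining term: {p_survive:.0%}")          # ~85%
```

Note how each reference class gives a different number; choosing between them is exactly the reference class problem discussed below.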
Base rates represent the outside view for a given question. They are a good place to start but can often be improved on by updating the probability according to an inside view.
Note that there are often several reference classes we could use, each implying a different base rate. The problem of deciding which class to use is known as the reference class problem.
A forecaster is said to be calibrated if the events they say have an X% chance of happening happen X% of the time.
Most people are overconfident. When they say an event has a 99% chance of happening, often the events happen much less frequently than that.
This natural overconfidence can be corrected with calibration training. In calibration training, you are asked to answer a set of factual questions, assigning a probability to each of your answers.
A list of calibration training exercises can be found here.
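A simple way to check your own calibration is to bucket past predictions by stated probability and compare each bucket to its observed frequency. The sample track record below is hypothetical.

```python
# Minimal calibration check: group predictions by stated probability
# and compare to how often the events actually happened.
from collections import defaultdict

# (stated probability, did the event happen?) — hypothetical data
predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False),
]

buckets = defaultdict(list)
for prob, outcome in predictions:
    buckets[prob].append(outcome)

for prob in sorted(buckets):
    outcomes = buckets[prob]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {prob:.0%}: happened {observed:.0%} of the time (n={len(outcomes)})")
```

A well-calibrated forecaster would see the observed frequencies track the stated probabilities; in this hypothetical record, the 90% bucket resolves true only 75% of the time, a sign of overconfidence.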
Much like in Fermi estimation, a question about a future event can often be decomposed into several sub-questions; each sub-question can be answered separately, and those answers can then be combined into an answer to the original question.
Suppose you are interested in whether AI will cause a catastrophe by 2100. For AI to cause such an event, several things need to be true: (1) it needs to be possible to build advanced AI with agentic planning and strategic awareness by 2100, (2) there need to be strong incentives to apply such a system, (3) it needs to be difficult to align such a system should it be deployed, (4) a deployed and unaligned AI would act in unintended and high-impact power-seeking ways, causing trillions of dollars in damage, (5) these consequences would result in the permanent disempowerment of all humanity, and (6) this disempowerment would constitute an existential catastrophe. Taking the probabilities that Eli Lifland assigned to each question gives an 80%, 85%, 75%, 90%, 80% and 95% chance of events 1 through 6 respectively. Since each event is conditional on the ones before it, we can find the probability of the original question by multiplying all the probabilities together. This gives Eli Lifland a probability of existential risk from misaligned AI before 2100 of approximately 35%. For more detail see Eli’s original post here.
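The multiplication step in the example above is just a chain of conditional probabilities:

```python
# Combining the six conditional probabilities from the example above.
# Each step is conditional on all previous steps holding.
conditional_probs = [0.80, 0.85, 0.75, 0.90, 0.80, 0.95]

p = 1.0
for step in conditional_probs:
    p *= step

print(f"Combined probability: {p:.0%}")  # ~35%
```

This structure also makes disagreements legible: two forecasters can pinpoint which step of the chain they assign different probabilities to.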
Decomposing questions into their constituent parts, assigning probabilities to these sub-questions, and combining these probabilities to answer the original question is believed to improve forecasts. This is because, while each individual estimate is noisy, combining the estimates from many sub-questions cancels out much of the noise and leaves the signal.
Question decomposition is also good at increasing epistemic legibility. It helps forecasters to communicate to others why they’ve made the forecast that they did and it allows them to identify their specific points of disagreement.
A premortem is a strategy used once you’ve assigned a probability to an event: imagine that your forecast turned out to be wrong, then work backwards to determine what could have caused this.
It is simply a way to reframe the question “in what ways might I be wrong?” but in a way that reduces motivated reasoning caused by attachment to the bottom line.
While the above techniques are useful, they are no substitute for actually making predictions. Get out there and make predictions! Use the above techniques. Keep track of your predictions. Periodically evaluate questions that have been resolved and review your performance. Assess the degree to which you are calibrated. Look out for systematic mistakes that you might be making. Make more predictions! Over time, like with any skill, your ability can and should improve.
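When reviewing resolved predictions, one standard scoring rule (not named above, but widely used in the forecasting literature) is the Brier score: the mean squared difference between your stated probability and the outcome. The track record below is hypothetical.

```python
# Brier score: mean squared error between stated probability and outcome.
# Lower is better; always guessing 50% scores 0.25.

def brier_score(predictions):
    """predictions: list of (probability, outcome) pairs, outcome 0 or 1."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Hypothetical track record of resolved predictions.
track_record = [(0.9, 1), (0.8, 1), (0.7, 0), (0.95, 1), (0.4, 0)]
print(f"Brier score: {brier_score(track_record):.3f}")
```

Tracking this number over time gives a concrete measure of whether your forecasting ability is improving.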
Other resources include:
Superforecasting by Philip Tetlock and Dan Gardner
Intro to Forecasting by Alex Lawson
Forecasting Newsletter by Nuño Sempere
State of the Art
For many years there have been calls to apply forecasting techniques to non-academic domains, including journalism, policy, investing and business strategy. Several organisations now exist within these niches.
Metaculus is a popular and established web platform for forecasting. Their questions mainly focus on geopolitics, the coronavirus pandemic and topics of interest to Effective Altruism.
They host prediction competitions with real money prizes and collect and track public predictions made by various figures.
Cultivate Labs build tools that companies can use to crowdsource information from among their employees. This helps leadership to understand the consensus of people working on the ground and use this to improve the decisions they make.
Kalshi provide real money prediction markets on geopolitical events. The financial options they provide are intended to be used as hedges for political risk.
Manifold.Markets is a prediction market platform that uses play money. It is noteworthy for its ease of use, great UI and the fact that the market creator decides how the market resolves.
QURI is a research organisation that builds tools that make it easier to make good forecasts. Their most notable tool is Squiggle—a programming language designed to be used to make legible forecasts in a wide range of contexts.