What problem do you have with the two use cases I provide in the post?
If you want to make testable predictions about the future, you need to have good models of the world. To have good models of the world, you often need to learn from the past. As mentioned in the post, this requires you to do retrospective forecasting.
Concrete example: if you’re going to make forecasts about whether there will be a civil war in the United States before the end of the century, you need to reason from models of what causes civil wars. For those models to be good, you need to have updated your beliefs based on what you know about past civil wars, which requires you to know how likely those wars were to occur both under each model of the world and overall, since both probabilities go into Bayesian updating.
This question helped me realize that, if we have a theory that retrospective forecasting works, we can use that theory to make testable predictions, and then we can build up evidence for or against retrospective forecasting.
Suppose we have two models, Model A and Model B, each with a prior probability of 50%. We also have a history to look at. We can apply retrospective forecasting to determine P(History|Model A) and P(History|Model B), and then from Bayes' theorem we can update our estimate of which model is more likely. Suppose this tells us that Model A is much more likely.
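To make the update concrete, here is a minimal sketch of that calculation. The likelihood values (0.30 and 0.05) are hypothetical placeholders standing in for whatever retrospective forecasting would actually produce:

```python
def posterior_a(prior_a, likelihood_a, likelihood_b):
    """Posterior probability of Model A given the observed history,
    via Bayes' theorem, assuming the two models are exhaustive
    and mutually exclusive."""
    prior_b = 1.0 - prior_a
    # P(History) = P(History|A)P(A) + P(History|B)P(B)
    evidence = likelihood_a * prior_a + likelihood_b * prior_b
    return likelihood_a * prior_a / evidence

# 50% prior on each model; suppose retrospective forecasting gives
# P(History | Model A) = 0.30 and P(History | Model B) = 0.05.
p_a = posterior_a(0.5, 0.30, 0.05)
print(round(p_a, 3))  # -> 0.857, i.e. Model A comes out much more likely
```

Even with equal priors, a sixfold likelihood ratio pushes the posterior on Model A to about 86%, which is the sense in which the history can tell us one model is "much more likely."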
Now we can use Model A and Model B to make testable predictions about the future. As the future unfolds, if events occur as predicted by Model A, this is evidence that retrospective forecasting works. If events occur as predicted by Model B, this is evidence that retrospective forecasting doesn’t work.
What is the use of retrospective forecasting unless you can come up with testable predictions?