If we collect evidence about the world, build a model by unbiased means, and predict the consequences of our possible actions, then the only way to do better is for someone else to build a model, openly share how they built it, and show that it makes better predictions.
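A minimal sketch of what "show it has better predictions" can mean in practice: score each model by the total log-probability it assigned to what actually happened, and prefer the one with the higher score. The models, outcomes, and numbers here are invented for illustration.

```python
import math

def log_score(predicted_probs, outcomes):
    """Sum of log-probabilities each model assigned to the outcomes that occurred."""
    return sum(math.log(p[o]) for p, o in zip(predicted_probs, outcomes))

# Each entry: the probability a model assigned to outcomes 0 and 1 (made-up data).
model_a = [{0: 0.8, 1: 0.2}, {0: 0.3, 1: 0.7}, {0: 0.9, 1: 0.1}]
model_b = [{0: 0.5, 1: 0.5}, {0: 0.5, 1: 0.5}, {0: 0.5, 1: 0.5}]
observed = [0, 1, 0]

better = "A" if log_score(model_a, observed) > log_score(model_b, observed) else "B"
print(better)  # model A put more probability on what actually happened
```

The log score is a proper scoring rule, so a model cannot game it by hedging: it is rewarded exactly for putting probability on what occurs.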
The second part is about being a rational agent, and it includes the result that two rational agents who share priors cannot agree to disagree (Aumann's agreement theorem).
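A simplified illustration of why shared priors rule out persistent disagreement (this skips the full common-knowledge machinery of Aumann's theorem): once two agents who start from the same prior have disclosed all their evidence to each other, Bayes' rule forces their posteriors to be identical. The coin hypotheses and observations below are invented for the example.

```python
from fractions import Fraction

def posterior(prior, likelihoods, data):
    """Bayes update of a prior over hypotheses given a sequence of observations."""
    weights = dict(prior)
    for obs in data:
        weights = {h: w * likelihoods[h][obs] for h, w in weights.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Common prior over two hypotheses about a coin, with exact arithmetic.
prior = {"fair": Fraction(1, 2), "biased": Fraction(1, 2)}
lik = {"fair":   {"H": Fraction(1, 2), "T": Fraction(1, 2)},
       "biased": {"H": Fraction(3, 4), "T": Fraction(1, 4)}}

alice_data, bob_data = ["H", "H"], ["H", "T"]

# After full disclosure, each agent conditions on the pooled evidence.
alice = posterior(prior, lik, alice_data + bob_data)
bob = posterior(prior, lik, bob_data + alice_data)  # order of updates is irrelevant
print(alice == bob)  # same prior + same evidence -> same posterior
```

Using `Fraction` keeps the arithmetic exact, so the equality of the two posteriors is literal rather than approximate.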
What this means for the real world:
It means that in any discussion about what to do, while we can never be certain, you cannot do better than the action a model predicts unless you advance a testably better model of your own. This applies to countless political questions.
It means that experts are often wrong: how many decades of experience they have, or what position they hold, is irrelevant. All that matters is what they know and how they know it.
I think the ‘metacognition’ part is a subset of this: it is just a way to do better as a human, and unfortunately it has obviously diminishing returns. Our best bet as humans is to build tools (software that actually implements some of the math implied above, for instance) and then to listen to what those tools tell us, once we get them properly debugged.