I did wind up with some personal katas.
Calibration using search: any time I am searching for something with a quantitative answer, I have the chance to do a Fermi estimate or reference-class forecast and get feedback on how I did.
Selection effects/Straussian readings: trying to figure out what incentives drove this particular piece of information to be in front of me at this moment.
Stack trace: finding the provenance of internal maps and noticing that they are often predicated on extremely sparse data that has then been overgeneralized.
Schematic thinking: an extension of the narrative fallacy. Noticing when alternative pieces could replace parts of an argument equally well. The implied degrees of freedom make the proposed explanation weaker than it might otherwise seem.
Do you have an explicit process for this?
Same as the posts on Fermi estimates. I just make a guess at whatever level of effort seems appropriate for the query (often pretty casual, but I'll take a bit more time if it's about something I feel is important or am especially uncertain about). Then, when I get the piece of information, I can reflect on reasons I might have been off. This often helps structure my inquiry into the information as well, as I map out the model differences that explain why it surprised me.
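The guess-then-check loop above can be sketched in code. This is just an illustrative sketch, not anything from the thread; the example values are made up, and measuring the miss in orders of magnitude is one reasonable scoring choice among several:

```python
import math

def log10_error(guess: float, actual: float) -> float:
    """Signed orders of magnitude the guess was off by."""
    return math.log10(guess / actual)

# Hypothetical example: record a guess before searching,
# then compare against the value the search turns up.
guess = 8_000_000       # my prior guess (illustrative)
actual = 8_300_000      # value found via search (illustrative)
print(f"off by {log10_error(guess, actual):+.2f} orders of magnitude")
```

A small positive or negative number means good calibration on that query; a consistent sign across many queries points at a systematic model difference worth mapping out.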
Do you write the guess down somewhere or just keep it in your head?
Written down if I need to multiply a few values to get a ballpark; in my head if it's just a direct guess.
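The multiply-a-few-values case might look like the following. Every factor here is a made-up illustrative guess (the coffee example and all its numbers are my own, not from the thread); the point is just that the decomposition is explicit enough to be worth writing down:

```python
# Hypothetical Fermi decomposition: cups of coffee drunk per year in a city.
# All factors are illustrative guesses, not sourced data.
population = 1_000_000    # guess: city population
frac_drinkers = 0.5       # guess: fraction who drink coffee
cups_per_day = 2          # guess: cups per drinker per day
days = 365

ballpark = population * frac_drinkers * cups_per_day * days
print(f"~{ballpark:.0e} cups/year")
```

Writing the factors down makes it possible, after the fact, to see which specific factor was off rather than just that the total was.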