While a true Bayesian’s estimate already includes the probability distributions of future experiments, in practice I don’t think it’s easy for us humans to do that. For instance, I know based on past experience that a documentary on X will not incorporate as much nuance and depth as an academic book on X. I *should* immediately reduce the strength of any update to my beliefs on X upon watching a documentary given that I know this, but it’s hard to do in practice until I actually read the book that provides the nuance.
In a context like that, I definitely have experienced the feeling of “I am pretty sure that I will believe X less confidently upon further research, but right now I can’t help but feel very confident in X.”
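The "true Bayesian" point here is the law of conservation of expected evidence: your prior must already equal the expectation of your posterior over the possible outcomes of a future experiment. A minimal sketch with made-up numbers (the specific probabilities are purely illustrative):

```python
# Conservation of expected evidence: the current estimate equals the
# expectation of the future posterior, weighted by how likely each
# evidential outcome is. All numbers below are hypothetical.

p_x = 0.7                # prior belief in X
p_e_given_x = 0.9        # chance the documentary supports X if X is true
p_e_given_not_x = 0.4    # chance it supports X even if X is false

# Total probability of seeing supportive evidence
p_e = p_e_given_x * p_x + p_e_given_not_x * (1 - p_x)

# Posterior in each branch, by Bayes' rule
post_if_e = p_e_given_x * p_x / p_e
post_if_not_e = (1 - p_e_given_x) * p_x / (1 - p_e)

# Average posterior over both possible outcomes
expected_posterior = post_if_e * p_e + post_if_not_e * (1 - p_e)

print(expected_posterior)  # equals the prior, 0.7
```

So a perfectly calibrated agent cannot expect the documentary (or the book) to move their belief in any particular direction in advance; the difficulty described above is precisely that humans fail to price this in.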
Thank you—this is an important distinction. Are we talking about how something feels, or about probability estimates? I’d argue the error is in using numbers and probability notation to describe feelings of confidence that you haven’t actually tried to be rational about.
The topic of illegible beliefs (related to aliefs), and of how to apply math to them, is virtually unexplored.
In practice, what I’m trying to do with exercises like calibration training is derive the latter (probability estimates) from the former (feelings of confidence).
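The core of calibration training can be sketched in a few lines: group your stated confidences into buckets and compare each bucket's claimed probability with the observed frequency of being right. The data below is invented purely for illustration:

```python
# A minimal calibration check: do claims made at X% confidence
# actually come true about X% of the time? Data is hypothetical.
from collections import defaultdict

# (stated confidence, whether the claim turned out true)
answers = [(0.9, True), (0.9, True), (0.9, False),
           (0.6, True), (0.6, False), (0.6, True), (0.6, False)]

buckets = defaultdict(list)
for conf, correct in answers:
    buckets[conf].append(correct)

for conf in sorted(buckets):
    hits = buckets[conf]
    freq = sum(hits) / len(hits)  # observed frequency of being right
    print(f"stated {conf:.0%} -> observed {freq:.0%} over {len(hits)} answers")
```

A persistent gap between a bucket's stated confidence and its observed frequency is the signal that a felt confidence level should be translated into a different number.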