Mental Calibration for Bayesian Updates?

Hey all,

After reading “How to Measure Anything,” I’ve experimented a bit with calibration training using the book’s calibration tools, and after being convinced by its data on the real-world usefulness of calibration in forecasting, I’ve seen a big improvement in my own calibration.

I’m wondering if anybody knows of similar tools and studies on calibrating Bayesian updating. Broadly, I imagine it would look like:

1. Using the tools and calibration methods I already use to figure out how the feeling of “correctness” of my prior correlates with a numerical probability.

2. Using similar (but probably not identical) tools to figure out how “convincing” the new data feels, and how that feeling correlates with specific numbers.

3. Combining these two numbers via Bayes’ theorem, so that I know approximately how much to update the original feeling to reflect the new information.

4. Using mnemonic or visualization techniques to pair the new feeling with the belief, so that the next time I recall the belief, I’ll feel the slightly different calibration.
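For concreteness, steps 1–3 might be sketched in code like this. This is just a minimal illustration of the arithmetic, not a claim about what the tools should look like; the function name and the example numbers are hypothetical, and the likelihood ratio stands in for the calibrated “convincingness” number from step 2:

```python
import math

def bayes_update(prior_prob, likelihood_ratio):
    """Bayes' theorem in log-odds form:
    posterior log-odds = prior log-odds + log(likelihood ratio)."""
    prior_log_odds = math.log(prior_prob / (1 - prior_prob))
    posterior_log_odds = prior_log_odds + math.log(likelihood_ratio)
    # Convert back from log-odds to a probability.
    return 1 / (1 + math.exp(-posterior_log_odds))

# Hypothetical example: a prior "feeling of correctness" calibrated
# to 70%, plus new evidence that feels three times as likely if the
# belief is true as if it is false (likelihood ratio = 3).
posterior = bayes_update(0.70, 3.0)
print(round(posterior, 3))  # → 0.875
```

One nice property of the log-odds form is that step 3 becomes simple addition, which seems easier to internalize as a feeling than multiplying odds directly.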

Anyway, I’m curious if anyone has experimented with these processes, if there’s any research on them, or if they’ve been discussed previously on LessWrong. I’d definitely like to lock down a procedure like this for myself.

I should note that much of the time I already do this naturally… but my guess is I systematically over- and under-update the feeling because of confirmation bias. I’d like to recalibrate my recalibration :).