Stuff I noticed so far from thinking about this:
Sensation of desire for closure.
Desire to appear smart (mostly in front of people with very good epistemics, where incentives are relatively well aligned with truth-oriented thinking, and where criticizing others and changing one's mind is incentivized but not over-incentivized; the desire is still there, though).
When I think of a (new) piece of evidence or argument, my mind often initially over-updates in that direction for a minute or so, until I have integrated it into my overall model. (This happens in both directions. That is, I think my intuitive beliefs fluctuate more than makes sense from a Bayesian perspective, though I keep track on the meta level that I might not endorse my current intuitive pessimism/optimism about something and still need to evaluate it more neutrally later.)
The last point may result from a hard-coded genetic heuristic learning rate. We can't update in a fully Bayesian way, and a learning rate is an approximation given computational constraints. There is an optimal learning rate, but it depends on context, such as how much to trust prior information and especially the volatility of the environment (see the sketch below). So your genetic prior for your learning rate may not match the dynamics of your current environment. I guess our modern environment changes faster than the ancestral environment, and most people update too slowly on new information. Updating much faster is probably adaptive, and I also have that tendency.
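A minimal sketch of the "optimal learning rate depends on volatility" point, under the assumption that the environment is a 1-D random walk tracked with a Kalman filter: the steady-state Kalman gain plays the role of the learning rate, and it grows with the environment's drift variance q relative to the observation noise variance r. The specific numbers below are just illustrative.

```python
import numpy as np

def steady_state_gain(q: float, r: float) -> float:
    """Steady-state Kalman gain ('learning rate') for tracking a 1-D random walk.

    q: drift variance per step (how volatile the environment is).
    r: observation noise variance (how noisy each piece of evidence is).
    """
    # The predicted variance s solves the steady-state Riccati equation
    # s**2 - q*s - q*r = 0; take its positive root.
    s = (q + np.sqrt(q**2 + 4 * q * r)) / 2
    return s / (s + r)

# A learning rate tuned for a slow-changing ("ancestral") environment
# badly under-updates if the environment actually drifts 100x faster.
print(steady_state_gain(q=0.01, r=1.0))  # ~0.10: optimal gain in the slow world
print(steady_state_gain(q=1.0, r=1.0))   # ~0.62: optimal gain in the fast world
```

The point is just that the optimal gain roughly scales with how fast the world changes, so a fixed, genetically set learning rate will be too low whenever the world speeds up relative to what it was tuned for.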