Feels like this points at correct things, and I’m amenable to it being one of the top posts for 2024. It didn’t change much for me (as opposed to @Ben Pace, who thinks about it many times per month according to his review), nor did it feel so spot-on that I’d want to give it a high vote. I’ll probably give something between 1 and 4.
Areas where it strikes me (admittedly without much thought or careful reading) as not perfectly right:
Notwithstanding the heading contra this, my instinct is to reduce “believing in” statements to a combination of “I believe (Bayesian-style) that good things happen if I invest in X” + “I am publicly declaring myself for X (kickstarter / commitment mechanism)”. Which is a little bit interesting, but also a known phenomenon. Added to that, you get boring old motivated cognition telling yourself “I’ll get this done in three hours”. This might be an effective semi-self-aware self-deception to get yourself to do things you wouldn’t otherwise do, but it is also manipulation of the Bayesian belief slots in your head in order to get some result.
So believing-in’s are Bayesian beliefs with some indirection, plus an expression of commitment and/or group affiliation. If so, that is useful to point out.
An extension here that’d be neat is to analyze how often expressed “values” are believing-in’s, e.g. “I believe in family”, “I believe in democracy”. If those are actually just Bayesian beliefs + commitment, then they’re a lot more defeasible than the intrinsic, inherent base “values” LessWrong normally talks about.