Bayesianism says that we should ideally reason in terms of:
Where is it defined this way?
I read the six volumes of Yudkowsky’s Rationality A-Z and nodded along, then saw somebody treating “bayesianism” as basically “subjective degrees of belief plus subjective updating”, which struck me as a dumb watering-down. Reading through this list, I was uncomfortable with #1 (because good reasoning can be richer than binary, and, as Richard says, fuzzy), #4 (we want our subjective credences to behave like real probabilities, but I don’t really expect them to), and #5 (again, we’d like to, but can at best approximate it). Now, the top does say we should “ideally” reason this way, which accounts for such human failings, but #5 also requires a strong sense of what priors are and where they come from, and I never got that from reading Rationality A-Z.
Re #2/#5: at some point I read a nice article, which I can no longer find, that introduced a concept whose name I’ve forgotten. The concept was a sense of the solidity or justification of a belief: if an expert on country X gives you a 50% chance that event E happens in X in the next year, that could (in principle) be a really solid 50%, far better grounded than a 50% from a layman. ChatGPT tells me the word for this distinction is “credence”, which is a terrible name, since “credence” is often used for the subjective probability itself (the 50%). Claude, OTOH, offered “resilience of credence” or “robustness of probability”, but I think the article I read used a single word. Anyone remember it? It’s weird how little we talk about this, since it’s exactly what a proper bayesian update needs (the sketch below tries to show why).
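To make that concrete, here’s a minimal sketch in Python, assuming a Beta-Bernoulli model as one possible way to formalize solidity; the specific parameters are mine for illustration, not from the article:

```python
# Two agents both report a 50% credence that event E happens, but with very
# different solidity. Model each agent's uncertainty about the underlying
# chance p of E as a Beta distribution: both priors have mean 0.5, but the
# expert's is far more concentrated around it.

layman = (1, 1)    # Beta(1, 1): uniform over p, a fragile 50%
expert = (50, 50)  # Beta(50, 50): tightly peaked at 0.5, a solid 50%

def posterior_mean(a, b, successes, failures):
    """Mean of the Beta posterior after a standard conjugate update."""
    return (a + successes) / (a + b + successes + failures)

# Before any evidence, the two credences are indistinguishable:
print(posterior_mean(*layman, 0, 0))  # 0.5
print(posterior_mean(*expert, 0, 0))  # 0.5

# After seeing E-like events occur 3 times in a row, the layman's credence
# swings hard while the expert's barely moves:
print(posterior_mean(*layman, 3, 0))  # 0.8
print(posterior_mean(*expert, 3, 0))  # ~0.515
```

The bare number 50% is identical in both cases; the solidity lives in how resistant it is to new evidence, which is exactly the information a bayesian update consumes and a point estimate throws away.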