The context for this post is that I’ve had qualms about bayesian epistemology for most of the last decade. My most notable attempts to express them previously were Realism about rationality and Against strong bayesianism. In hindsight, those posts weren’t great, but they’re interesting as documentation of waypoints on my intellectual journey (see also here and here). This post is another such waypoint. Since writing it last year, I’ve built on these ideas (and my qualms about expected utility maximization) to continue developing my theory of coalitional agency. I don’t know how compelling most readers find what I’ve written publicly about this research agenda so far (i.e. this sequence, most of the posts on this blog, and some recent shortforms), but I’m very excited about it and expect to make significant progress on it in 2026.
I’m also still fairly happy with this post specifically, and expect that it will stand the test of time better than the other two above (in part because it’s starting to articulate a positive vision rather than just bashing bayesianism). My main regret is on a pedagogical level: it was a mistake to start with point 1 (fuzzy truth values) rather than point 2 (the semantic view). I think it gave people the impression that I was primarily trying to defend fuzzy truth values. But most formal accounts of fuzzy truth values seem pretty useless. My main point was actually that epistemology should be formulated in terms of models—and that, once we do so, it’s hard to avoid assigning those models something like degrees of truth (even if we don’t yet know precisely how).
I’m also unsure whether explaining “reason in terms of models” via mathematical logic was a good idea. @Kaarel has a critique in the comments below which I’ve been slowly chewing on, and which deserves a substantive response.
My most substantive exchange in the comments was with @johnswentworth. This didn’t update me much. Here’s how John wanted to deal with vague propositions:
There’s some latent variable representing the semantics of “humanity will be extinct in 100 years”; call that variable S for semantics.
Lots of things can provide evidence about S. The sentence itself, context of the conversation, whatever my friend says about their intent, etc, etc.
… and yet it is totally allowed, by the math of Bayesian agents, for that variable S to still have some uncertainty in it even after conditioning on the sentence itself and the entire low-level physical state of my friend, or even the entire low-level physical state of the world.
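To make the proposal concrete, here’s a minimal toy version (my own construction for illustration, not a model John actually wrote down): a latent semantics variable S whose posterior stays spread out even after conditioning on all the available evidence, because each candidate semantics explains that evidence equally well.

```python
# Toy illustration (my construction, not John's actual model): a latent
# "semantics" variable S for "the Earth is a sphere" that remains uncertain
# even after conditioning on everything observable.

# Two candidate semantics for the sentence:
prior = {"strict_sphere": 0.5, "loose_sphere": 0.5}

# Likelihood of the total evidence E (the sentence itself, conversational
# context, the speaker's brain state, ...) under each semantics. If both
# semantics fit everything observable equally well, the likelihoods match.
likelihood = {"strict_sphere": 0.3, "loose_sphere": 0.3}

# Bayes: posterior(S | E) is proportional to prior(S) * likelihood(E | S)
unnormalized = {s: prior[s] * likelihood[s] for s in prior}
total = sum(unnormalized.values())
posterior = {s: p / total for s, p in unnormalized.items()}

print(posterior)  # {'strict_sphere': 0.5, 'loose_sphere': 0.5} -- still uncertain
```

As the quote says, nothing in the math forbids this residual spread in S.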
Basically, he wants to treat vagueness as a kind of inherent uncertainty that no amount of data can resolve. But vagueness and uncertainty are just different things! Most notably, uncertainty follows the laws of probability, whereas vagueness doesn’t. As a concrete example from the post itself:
“the Earth is a sphere” is mostly true, and “every point on the surface of a sphere is equally far away from its center” is precisely true. But “every point on the surface of the Earth is equally far away from the Earth’s center” seems ridiculous
Suppose John assigns [0.8, 0.9] credence to “the Earth is a sphere”, as his way of formalizing the vagueness of what counts as a sphere, and takes it as a tautology that every point on the surface of a sphere is equally far away from its center. Then he should assign [0.8, 0.9] credence to “every point on the surface of the Earth is equally far away from the Earth’s center”. But the latter statement is clearly false. There are probably various clever ways to try to escape this problem, but I don’t think any of them deals with the core issue.
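To spell out the probabilistic step (this is just standard probability theory applied to the example, with A and B abbreviating the two statements): even the one-directional tautology already forces a high credence in the clearly false conclusion.

```latex
% A = "the Earth is a sphere"
% B = "every point on the surface of the Earth is equally far away
%      from the Earth's center"
% If the implication is a tautology it gets credence 1, and then:
\begin{align*}
P(A \to B) = 1
  \;\Longrightarrow\; P(A \wedge \neg B) = 0
  \;\Longrightarrow\; P(B) \,\ge\, P(A \wedge B) = P(A) \,\ge\, 0.8.
\end{align*}
```

And since a sphere is definitionally the set of points equidistant from a center, the reverse implication holds too, pinning P(B) to exactly the [0.8, 0.9] interval.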
A more promising way to think about uncertainty vs vagueness: uncertainty is a description of your epistemic state within the context of a fixed “language game”, whereas vagueness involves a meta-game in which you might vary which language you’re using (either for coordination with other people or for coordination between your internal subagents). I’d like to eventually be able to formalize this perspective.
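As one very speculative gesture at what such a formalization might look like (my own sketch, not a worked-out theory): treat each candidate language as a precisification that crisply settles every sentence, keep ordinary probability within a fixed precisification, and let vagueness show up as variation across precisifications.

```python
# Speculative toy sketch of the uncertainty-vs-vagueness distinction.
# Each "language" (precisification) crisply settles what counts as a sphere;
# uncertainty lives inside a language, vagueness is disagreement across them.

from dataclasses import dataclass

@dataclass
class Language:
    # Maximum allowed deviation from a perfect sphere (fraction of radius)
    # for something to count as "a sphere" in this language.
    sphere_tolerance: float

def is_sphere(oblateness: float, lang: Language) -> bool:
    """Within a fixed language, 'X is a sphere' is crisply true or false."""
    return oblateness <= lang.sphere_tolerance

EARTH_OBLATENESS = 0.0034  # Earth's actual flattening, roughly

# Vagueness: the candidate languages we might be playing, and how natural
# each one is as a convention (weights over conventions, not credences
# about how the world is).
languages = [
    (Language(sphere_tolerance=0.0), 0.2),    # strict geometric usage
    (Language(sphere_tolerance=0.01), 0.5),   # everyday loose usage
    (Language(sphere_tolerance=0.001), 0.3),  # fussy-but-not-exact usage
]

# A "degree of truth" for the vague sentence: weighted fraction of
# precisifications in which it comes out true.
degree = sum(w for lang, w in languages if is_sphere(EARTH_OBLATENESS, lang))
print(f"degree of truth of 'the Earth is a sphere': {degree}")  # 0.5
```

The point of the separation is that the weights over languages track which convention is in play rather than how the world is, which is why they shouldn’t be mixed with ordinary credences the way the sphere example above mixes them.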
As a final note, there are also a bunch of comments on the version of the post on my blog, which LW readers might find interesting.