Hi. I’ll mostly be making snarky comments on decision-theory-related posts.
scmbradley
Uncertainty, in the presence of vivid hopes and fears, is painful, but must be endured if we wish to live without the support of comforting fairy tales
— Bertrand Russell, History of Western Philosophy (from the introduction)
it is clear that each party to this dispute – as to all that persist through long periods of time – is partly right and partly wrong
— Bertrand Russell, History of Western Philosophy (from the introduction, again).
I have lots of particular views and some general views on decision theory. I picked on decision theory posts because it’s something I know something about. I know less about some of the other things that crop up on this site…
Sorry I’m new. I don’t understand. What do you mean?
I’ve had rosewater flavoured ice cream.
I bet cabbage ice cream does not taste as nice.
Savage’s representation theorem in Foundations of Statistics starts by assuming neither. He just needs some axioms about preference over acts, some independence concepts, and some pretty darn strong assumptions about the nature of events.
So it’s possible to do it without assuming a utility scale or a probability function.
This seems to be orthogonal to the current argument. The Dutch book argument says that your will-to-wager fair betting prices for dollar stakes had better conform to the axioms of probability. Cox’s theorem says that your real-valued logic of plausible inference had better conform to the axioms of probability. So you need the extra step of saying that your betting behaviour should match up with your logic of plausible inference before the arguments support each other.
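To make the Dutch book side concrete, here’s a toy example of my own (not from the post): an agent whose fair prices for $1-stake bets on H and on not-H sum to more than 1 can be sold both bets and loses in every world.

```python
# Toy Dutch book (my own illustration). The agent's will-to-wager fair
# prices for $1-stake bets violate additivity: p(H) + p(not-H) > 1.
price_h, price_not_h = 0.6, 0.6  # hypothetical incoherent prices

for h_is_true in (True, False):
    # Exactly one of the two bets pays $1 in any given world.
    payout = (1 if h_is_true else 0) + (0 if h_is_true else 1)
    # Agent's net after paying both prices up front.
    net = payout - (price_h + price_not_h)
    print(f"H={h_is_true}: agent nets {net:+.2f}")  # -0.20 either way: sure loss
```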
If you weaken your will-to-wager assumption and effectively allow your agents to offer bid-ask spreads on bets (I’ll buy bets on H for x, but sell them for y), then you get “Dutch-book-like” arguments showing that your beliefs conform to Dempster-Shafer belief functions, or Choquet capacities, depending on what other constraints you allow.
Or, if you allow that the world is non-classical – that the function that decides which propositions are true is not a classical logic valuation function – then you get similar results.
Other arguments for having probability theory be the right representation of belief include representation theorems of various kinds, Cox’s theorem, going straight from qualitative probability orderings, gradational-accuracy-style arguments…
I think this misses the point, somewhat. There are important norms on rational action that don’t apply only in the abstract case of the perfect bayesian reasoner. For example, some kinds of nonprobabilistic “bid/ask” betting strategies can be Dutch-booked and some can’t. So even if we don’t have point-valued will-to-wager values, there are still sensible and not sensible ways to decide what bets to take.
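Here’s a toy version of that point, again my own sketch: with bid-ask spreads on H and not-H, the two simplest sure-loss patterns are buying too dear and selling too cheap. (A real treatment would check all bookie strategies, not just these two, so this is not a complete Dutch-book test.)

```python
# Sketch: bid = price at which the agent buys a $1-stake bet,
# ask = price at which it sells one. Checks only the two simplest
# sure-loss patterns; not a complete coherence test.
def sure_loss(bid_h, ask_h, bid_not_h, ask_not_h):
    # Bookie sells the agent both bets at its bids: the agent pays the
    # two bids and collects exactly $1 in every world.
    overbuys = bid_h + bid_not_h > 1
    # Bookie buys both bets from the agent at its asks: the agent
    # collects the two asks and pays out exactly $1 in every world.
    oversells = ask_h + ask_not_h < 1
    return overbuys or oversells

print(sure_loss(0.6, 0.6, 0.6, 0.6))  # True: point-valued but non-additive
print(sure_loss(0.3, 0.7, 0.3, 0.7))  # False: this spread avoids both traps
```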
What do you mean “the statement is affected by a generalisation”? What does it mean for something to be “affected by a generalisation”? What does it mean for a statement to be “affected”?
The claim is a general one. Are general claims always false? I highly doubt that. That said, this generalisation might be false, but it seems like establishing that would require more than just pointing out that the claim is general.
What the Dutch book theorem gives you are restrictions on the kinds of will-to-wager numbers you can exhibit and still avoid sure loss. It’s a big leap to claim that these numbers perfectly reflect what your degrees of belief ought to be.
But that’s not really what’s at issue. The point I was making is that even among imperfect reasoners, there are better and worse ways to reason. We’ve sorted out the perfect case now. It’s been done to death. Let’s look at what kind of imperfect reasoning is best.
This thought isn’t original to me, but it’s probably worth making. It feels like there are two sorts of axioms. I am following tradition in describing them as “rationality axioms” and “structure axioms”. The rationality axioms (like the transitivity of the order among acts) are norms on action. The structure axioms (like P6) aren’t normative at all. (P6 is about structure on the world; how bizarre would it be to say “The world ought to be such that P6 holds of it”?)
Given this, and given the necessity of the structure axioms for the proof, it feels like Savage’s theorem can’t serve as a justification of Bayesian epistemology as a norm of rational behaviour.
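For reference, here’s a rough paraphrase of P6 from memory (check Foundations of Statistics for Savage’s exact wording): if $f \prec g$, then for every consequence $x$ there is a finite partition $\{S_1, \dots, S_n\}$ of the state space such that for each cell $S_i$,

$$ f^{x}_{S_i} \prec g \quad \text{and} \quad f \prec g^{x}_{S_i}, $$

where $h^{x}_{S_i}$ is the act $h$ altered to give consequence $x$ on every state in $S_i$. The existence of the partition is the structural demand: the world has to be divisible into events that are all this unimportant.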
Er. What? You can call it a false generalisation all you like; that isn’t in itself enough to convince me it is false. (It may well be false; that’s not what’s at stake here.) You seem to be suggesting that merely calling it a generalisation is enough to impugn its status.
And in homage to your unconventional arguing style, here are some non sequiturs: How many angels can dance on the head of a pin? Did Thomas Aquinas prefer red wine or white wine? Was Stalin left-handed? What colour were Sherlock Holmes’ eyes?
But why ought the world be such that such a partition exists for us to name? That doesn’t seem normative. I guess there’s a minor normative element in that it demands “If the world conspires to allow us to have partitions like the ones needed in P6, then the agent must be able to know of them and reason about them” but that still seems secondary to the demand that the world is thus and so.
Ah I see now. Glad we cleared that up.
Still, I think there’s something to the idea that if there is a genuine debate about some claim that lasts a long time, then there might well be some truth on either side. So perhaps Russell was wrong to universally quantify over “debates” (as your counterexamples might show), but I think there is something to the claim.
Anyone who can handle a needle convincingly can make us see a thread which isn’t there
— E.H. Gombrich
The greatest challenge to any thinker is stating the problem, in a way that will allow a solution
— Bertrand Russell
P6 entails that there are (uncountably) infinitely many events; I sketch why below. It is at least compatible with modern physics that the world is fundamentally discrete, both spatially and temporally. The visible universe is bounded. So it may be that there are only finitely many possible configurations of the universe. It’s a big number, sure, but if it’s finite, then Savage’s theorem is irrelevant: it doesn’t tell us anything about what to believe in our world. This is perhaps a silly point, and there’s probably a nearby theorem that works for “appropriately large finite worlds”, but still. I don’t think you can just uncritically say “surely the world is thus and so”.
If this is supposed to say something normative about how I should structure my beliefs, then the structural premises should be true of the world I have beliefs about.
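The sketch promised above (my gloss, so take it with appropriate caution): P6 forces the derived probability measure to be atomless,

$$ \text{P6} \;\Rightarrow\; \forall E\, \big[ P(E) > 0 \Rightarrow \exists F \subset E :\ 0 < P(F) < P(E) \big], $$

and iterating such splits (Savage in fact shows the range of $P$ is all of $[0,1]$) yields events of uncountably many distinct probabilities. A finite state space $S$ supplies at most $2^{|S|}$ events, so it can’t satisfy P6.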
The VNM utility theorem implies there is some good we value highest? Where has this come from? I can’t see how this could be true. The utility theorem only applies once you’ve fixed what your decision problem looks like…
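For comparison, here is the standard statement of the theorem’s conclusion, as I understand it: if $\succeq$ over lotteries on a fixed outcome set $X$ satisfies the VNM axioms (completeness, transitivity, continuity, independence), then there is a $u : X \to \mathbb{R}$ such that

$$ p \succeq q \iff \sum_{x \in X} p(x)\,u(x) \ge \sum_{x \in X} q(x)\,u(x), $$

with $u$ unique only up to positive affine transformation. Nothing here singles out a “highest good”: $u$ just represents the preferences you started with, over the outcomes you fixed in advance.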