Subjective and Objective Bayesianism. Are there constraints on prior probabilities other than the probability laws? Consider a situation in which you are to draw a ball from an urn filled with red and black balls. Suppose you have no other information about the urn. What is the prior probability (before drawing a ball) that, given that a ball is drawn from the urn, the drawn ball will be black? The question divides Bayesians into two camps:
(a) Subjective Bayesians emphasize the relative lack of rational constraints on prior probabilities. In the urn example, they would allow that any prior probability between 0 and 1 might be rational (though some Subjective Bayesians (e.g., Jeffrey) would rule out the two extreme values, 0 and 1). The most extreme Subjective Bayesians (e.g., de Finetti) hold that the only rational constraint on prior probabilities is probabilistic coherence. Others (e.g., Jeffrey) classify themselves as subjectivists even though they allow for some relatively small number of additional rational constraints on prior probabilities. Since subjectivists can disagree about particular constraints, what unites them is that their constraints rule out very little. For Subjective Bayesians, our actual prior probability assignments are largely the result of non-rational factors—for example, our own unconstrained, free choice or evolution or socialization.
(b) Objective Bayesians (e.g., Jaynes and Rosenkrantz) emphasize the extent to which prior probabilities are rationally constrained. In the above example, they would hold that rationality requires assigning a prior probability of 1⁄2 to drawing a black ball from the urn. They would argue that any other probability would fail the following test: Since you have no information at all about which balls are red and which balls are black, you must choose prior probabilities that are invariant with a change in label (“red” or “black”). But the only prior probability assignment that is invariant in this way is the assignment of prior probability of 1⁄2 to each of the two possibilities (i.e., that the ball drawn is black or that it is red).
In the limit, an Objective Bayesian would hold that rational constraints uniquely determine prior probabilities in every circumstance. This would make the prior probabilities logical probabilities determinable purely a priori.
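The label-invariance argument in (b) is short enough to write out. Writing p for the prior probability of drawing a black ball, the argument runs:

```latex
% Relabeling swaps "black" and "red", so the assignment
% (P(\text{black}),\, P(\text{red})) = (p,\, 1-p)
% becomes (1-p,\, p). Invariance under relabeling requires
p = 1 - p \quad\Longrightarrow\quad p = \tfrac{1}{2}.
```

So on the Objective Bayesian view, symmetry of ignorance leaves exactly one admissible prior.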
The SEP (quoted above) is quite good on this subject.
Under these definitions, Eliezer and LW in general fall under the Objective category. We tend to believe that two agents with the same knowledge should assign the same probability.
It seems different from gim's (and my) explanations below, where "subjective Bayesianism" means treating probabilities as statements about states of knowledge.
I think everyone agrees on the directions “more subjective” and “more objective”, but they use the words “subjective”/”objective” to mean “more subjective/objective than me”.
A very subjective position would be to believe that there are no “right” prior probabilities, and that it’s okay to just pick any prior depending on personal choice. (i.e. Agents with the same knowledge can assign different probabilities)
A very objective position would be to believe that there are some probabilities that must be the same even for agents with different knowledge. For example they might say that you must assign probability 1⁄2 to a fair coin coming up heads, no matter what your state of knowledge is. (i.e. Agents with different knowledge must (sometimes) assign the same probabilities)
Jaynes and Yudkowsky are somewhere in between these two positions (i.e. agents with the same knowledge must assign the same probabilities, but the probability of any event can vary depending on your knowledge of it), so they get called “objective” by the maximally subjective folk, and “subjective” by the maximally objective folk.
The definitions in the SEP above would definitely put Jaynes and Yudkowsky in the objective camp, but there’s a lot of room on the scale past the SEP definition of “objective”.
Also, here’s Eliezer on the subject: Probability is Subjectively Objective
Under his definitions he’s subjective. But he would definitely say that agents with the same state of knowledge must assign the same probabilities, which rules him out of the very subjective camp.
What about the idea of "well-calibrated judgement", where I must be right 9 times out of 10 when I say something has 90 percent probability? Yudkowsky discussed it in the article "Cognitive biases in global risks", if I remember correctly.
In that case, have I assigned some probability distribution over my own judgements, which could be about completely different external objects?
I’m not quite sure what you mean here, but I don’t think the idea of calibration is directly related to the subjective/objective dichotomy. Both subjective and objective Bayesians could desire to be well calibrated.
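The calibration idea mentioned above can be made concrete with a small numerical sketch. This is a minimal illustration (the function name `calibration` and the simulated data are hypothetical, not from any of the articles discussed): a forecaster is well calibrated at the 90% level if, across all claims they assign probability 0.9, about 9 in 10 come true, no matter how unrelated the individual claims are.

```python
import random

def calibration(forecasts):
    """forecasts: list of (stated_probability, outcome) pairs.

    Returns a dict mapping each stated probability to the observed
    frequency with which those claims came true."""
    buckets = {}
    for p, outcome in forecasts:
        buckets.setdefault(p, []).append(outcome)
    return {p: sum(hits) / len(hits) for p, hits in buckets.items()}

# Simulate a well-calibrated forecaster: each claim stated with
# probability 0.9 comes true with frequency 0.9.
random.seed(0)
forecasts = [(0.9, random.random() < 0.9) for _ in range(10000)]
print(calibration(forecasts)[0.9])  # close to 0.9
```

Note that calibration is a property of the forecaster's whole track record, not of any single claim, which is why both subjective and objective Bayesians can aim for it.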