I like issues (2) and (3) in your breakdown, but I don’t think (1) captures an important aspect of the Bayesian/frequentist debate. I don’t really associate frequentism with a denial of probabilism (the claim that the degrees of belief of a rational agent obey the probability calculus). I do think there is an interesting disagreement in the vicinity of (1) about how degrees of belief should be set.
My model of a frequentist is someone who thinks relative frequency should be treated as an expert function: If rf(X) is the relative frequency with which propositions like X are true in some appropriate reference class, then P(X | rf(X) = x) = x. This seems to me the most natural interpretation of the claim that probabilities are just relative frequencies. My frequentist doesn’t answer “no” to (1). She does think that subjective anticipations obey the probability calculus, and this is because relative frequencies obey the calculus and subjective anticipations should be guided by knowledge of relative frequencies. So she treats relative frequency as an expert function, which means she tries to maximize her calibration.
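To make the expert-function idea concrete, here is a tiny simulation (my own illustration, not part of the original argument): an agent who learns that the relative frequency of truth in a reference class is 0.3 and defers to it, i.e. sets P(X | rf(X) = 0.3) = 0.3, will be well calibrated on propositions drawn from that class.

```python
import random

random.seed(0)

# A reference class in which 3 of 10 propositions are true: rf(X) = 0.3.
reference_class = [True] * 3 + [False] * 7

# The frequentist expert-function principle: on learning rf(X) = 0.3,
# assign credence 0.3 to a proposition drawn from this class.
credence = sum(reference_class) / len(reference_class)

# Calibration check: among many drawn propositions, the fraction that are
# true should match the credence assigned to them.
draws = [random.choice(reference_class) for _ in range(100_000)]
observed = sum(draws) / len(draws)

print(credence)              # 0.3
print(round(observed, 2))    # ≈ 0.3
```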
The Bayesian does not think the rational agent should always try to maximize calibration. There are situations where one should be willing to sacrifice calibration for discrimination. Eliezer has a good example of this in A Technical Explanation of Technical Explanation. Here’s my understanding of the difference: The Bayesian treats the truth function (the function that assigns 1 to truths and 0 to falsehoods) as an expert function, and this is incompatible with treating relative frequency as an expert function. Trying to estimate truth can lead you to intentionally sacrifice calibration for discrimination; trying to maximize calibration cannot.
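The trade-off can be made concrete with the standard Murphy decomposition of the Brier score (Brier = reliability − resolution + uncertainty), where the reliability term tracks miscalibration and the resolution term tracks discrimination. Here is a toy sketch of my own (not the example from Eliezer's essay): a forecaster who always predicts the base rate is perfectly calibrated but scores far worse than a slightly miscalibrated forecaster who discriminates which events occur.

```python
def brier(forecasts, outcomes):
    """Mean squared error of probabilistic forecasts (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

def decompose(forecasts, outcomes):
    """Murphy decomposition: return (reliability, resolution, uncertainty),
    where Brier = reliability - resolution + uncertainty."""
    n = len(outcomes)
    base_rate = sum(outcomes) / n
    groups = {}  # group outcomes by the forecast value they received
    for f, o in zip(forecasts, outcomes):
        groups.setdefault(f, []).append(o)
    reliability = sum(len(os) * (f - sum(os) / len(os)) ** 2
                      for f, os in groups.items()) / n
    resolution = sum(len(os) * (sum(os) / len(os) - base_rate) ** 2
                     for os in groups.values()) / n
    uncertainty = base_rate * (1 - base_rate)
    return reliability, resolution, uncertainty

outcomes = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]  # 3 of 10 events occur

# Perfectly calibrated, zero discrimination: always predict the base rate.
calibrated = [0.3] * 10
# Slightly miscalibrated, but discriminates which events will occur.
discriminating = [0.9, 0.9, 0.9, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]

print(round(brier(calibrated, outcomes), 4))      # 0.21
print(round(brier(discriminating, outcomes), 4))  # 0.01
```

The base-rate forecaster has a reliability term of exactly zero (perfect calibration) but also zero resolution, so her score is pinned at the uncertainty term; the discriminating forecaster pays a small reliability penalty and still scores much better. This is the sense in which estimating truth can reward sacrificing calibration for discrimination.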
So maybe (1) should be supplemented with something like this:
(1′) If the answer to (1) is “yes”, whether subjective anticipations should always be guided by beliefs about relative frequencies.