# Interpretations of “probability”

(Written for Arbital in 2016.)

What does it mean to say that a flipped coin has a 50% probability of landing heads?

Historically, there are two popular types of answers to this question, the “frequentist” and “subjective” (aka “Bayesian”) answers, which give rise to radically different approaches to experimental statistics. There is also a third “propensity” viewpoint which is largely discredited (assuming the coin is deterministic). Roughly, the three approaches answer the above question as follows:

• The propensity interpretation: Some probabilities are just out there in the world. It’s a brute fact about coins that they come up heads half the time. When we flip a coin, the coin has a fundamental propensity of 0.5 to show heads. When we say the coin has a 50% probability of being heads, we’re talking directly about this propensity.

• The frequentist interpretation: When we say the coin has a 50% probability of being heads after this flip, we mean that there’s a class of events similar to this coin flip, and across that class, coins come up heads about half the time. That is, the frequency of the coin coming up heads is 50% inside the event class, which might be “all other times this particular coin has been tossed” or “all times that a similar coin has been tossed”, and so on.

• The subjective interpretation: Uncertainty is in the mind, not the environment. If I flip a coin and slap it against my wrist, it’s already landed either heads or tails. The fact that I don’t know whether it landed heads or tails is a fact about me, not a fact about the coin. The claim “I think this coin is heads with probability 50%” is an expression of my own ignorance, and 50% probability means that I’d bet at 1 : 1 odds (or better) that the coin came up heads.
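The odds-to-probability correspondence in the last bullet is simple arithmetic: odds of a : b in favor of an event imply a probability of a/(a + b). A minimal sketch (the function names are mine, for illustration):

```python
from fractions import Fraction

def odds_to_probability(for_, against):
    """Odds of `for_ : against` in favor of an event imply a
    probability of for_ / (for_ + against)."""
    return Fraction(for_, for_ + against)

def probability_to_odds(p):
    """Inverse direction: a probability p corresponds to odds of
    p : (1 - p), reduced to lowest integer terms."""
    frac = Fraction(p).limit_denominator()
    return frac.numerator, frac.denominator - frac.numerator

assert odds_to_probability(1, 1) == Fraction(1, 2)    # 1 : 1 odds <-> 50%
assert probability_to_odds(Fraction(3, 4)) == (3, 1)  # 75% <-> 3 : 1 odds
```

Using exact fractions keeps the 1 : 1 ↔ 50% correspondence free of rounding noise.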

For a visualization of the differences between these three viewpoints, see Correspondence visualizations for different interpretations of “probability”. For examples of the difference, see Probability interpretations: Examples. See also the Stanford Encyclopedia of Philosophy article on interpretations of probability.

The propensity view is perhaps the most intuitive view, as for many people, it just feels like the coin is intrinsically random. However, this view is difficult to reconcile with the idea that once we’ve flipped the coin, it has already landed heads or tails. If the event in question is decided deterministically, the propensity view can be seen as an instance of the mind projection fallacy: When we mentally consider the coin flip, it feels 50% likely to be heads, so we find it very easy to imagine a world in which the coin is fundamentally 50%-heads-ish. But that feeling is actually a fact about us, not a fact about the coin; and the coin has no physical 0.5-heads-propensity hidden in there somewhere — it’s just a coin.

The other two interpretations are both self-consistent and give rise to pragmatically different statistical techniques; there has been much debate as to which is preferable. The subjective interpretation is more generally applicable, as it allows one to assign probabilities (interpreted as betting odds) to one-off events.

## Frequentism vs subjectivism

As an example of the difference between frequentism and subjectivism, consider the question: “What is the probability that Hillary Clinton will win the 2016 US presidential election?”, as analyzed in the summer of 2016.

A stereotypical (straw) frequentist would say, “The 2016 presidential election only happens once. We can’t observe a frequency with which Clinton wins presidential elections. So we can’t do any statistics or assign any probabilities here.”

A stereotypical subjectivist would say: “Well, prediction markets tend to be pretty well-calibrated about this sort of thing, in the sense that when prediction markets assign 20% probability to an event, it happens around 1 time in 5. And the prediction markets are currently betting on Hillary at about 3 : 1 odds. Thus, I’m comfortable saying she has about a 75% chance of winning. If someone offered me 20 : 1 odds against Clinton — they get \$1 if she loses, I get \$20 if she wins — then I’d take the bet. I suppose you could refuse to take that bet on the grounds that you Just Can’t Talk About Probabilities of One-off Events, but then you’d be pointlessly passing up a really good bet.”
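The subjectivist’s arithmetic here can be made explicit: the market’s 3 : 1 odds imply a probability of 3/4, and at that probability the offered 20 : 1 bet has a large positive expected value. A minimal sketch (function names are illustrative):

```python
def expected_value(p_win, payout_if_win, stake):
    """Expected profit of a bet: win `payout_if_win` with probability
    `p_win`; otherwise lose `stake`."""
    return p_win * payout_if_win - (1 - p_win) * stake

# The market's 3 : 1 odds on Clinton imply a probability of 3 / (3 + 1) = 0.75.
p = 3 / (3 + 1)

# The offered 20 : 1 bet: risk $1 to win $20 if she wins.
print(expected_value(p, payout_if_win=20, stake=1))   # 14.75
```

An expected profit of \$14.75 on a \$1 stake is why the subjectivist calls refusing the bet “pointlessly passing up a really good bet”.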

A stereotypical (non-straw) frequentist would reply: “I’d take that bet too, of course. But my taking that bet is not based on rigorous epistemology, and we shouldn’t allow that sort of thinking in experimental science and other important venues. You can do subjective reasoning about probabilities when making bets, but we should exclude subjective reasoning in our scientific journals, and that’s what frequentist statistics is designed for. Your paper should not conclude ‘and therefore, having observed thus-and-such data about carbon dioxide levels, I’d personally bet at 9 : 1 odds that anthropogenic global warming is real,’ because you can’t build scientific consensus on opinions.”

...and then it starts getting complicated. The subjectivist responds, “First of all, I agree you shouldn’t put posterior odds into papers; and second of all, it’s not like your method is truly objective — the choice of ‘similar events’ is arbitrary, abusable, and has given rise to p-hacking and the replication crisis.” The frequentist says, “Well, your choice of prior is even more subjective, and I’d like to see you do better in an environment where peer pressure pushes people to abuse statistics and exaggerate their results,” and then down the rabbit hole we go.

The subjectivist interpretation of probability is common among artificial intelligence researchers (who often design computer systems that manipulate subjective probability distributions), Wall Street traders (who need to be able to make bets even in relatively unique situations), and common intuition (where people feel like they can say there’s a 30% chance of rain tomorrow without worrying about the fact that tomorrow only happens once). Nevertheless, the frequentist interpretation is commonly taught in introductory statistics classes, and is the gold standard for most scientific journals.

A common frequentist stance is that it is virtuous to have a large toolbox of statistical tools at your disposal. Subjectivist tools have their place in that toolbox, but they don’t deserve any particular primacy (and they aren’t generally accepted when it comes time to publish in a scientific journal).

An aggressive subjectivist stance is that frequentists have invented some interesting tools, and many of them are useful, but that refusing to consider subjective probabilities is toxic. Frequentist statistics were invented in a (failed) attempt to keep subjectivity out of science in a time before humanity really understood the laws of probability theory. Now we have theorems about how to manage subjective probabilities correctly, and how to factor personal beliefs out from the objective evidence provided by the data, and if you ignore these theorems you’ll get in trouble. The frequentist interpretation is broken, and that’s why science has p-hacking and a replication crisis even as all the Wall Street traders and AI scientists use the Bayesian interpretation. This “let’s compromise and agree that everyone’s viewpoint is valid” thing is all well and good, but how much worse do things need to get before we say “oops” and start acknowledging the subjective probability interpretation across all fields of science?

The most common stance among scientists and researchers is much more agnostic, along the lines of “use whatever statistical techniques work best at the time, and use frequentist techniques when publishing in journals because that’s what everyone’s been doing for decades upon decades upon decades, and that’s what everyone’s expecting.”

## Which interpretation is most useful?

Probably the subjective interpretation, because it subsumes the propensity and frequentist interpretations as special cases, while being more flexible than both.

When the frequentist “similar event” class is clear, the subjectivist can take those frequencies (often called base rates in this context) into account. But unlike the frequentist, she can also combine those base rates with other evidence that she’s seen, and assign probabilities to one-off events, and make money in prediction markets and/or stock markets (when she knows something that the market doesn’t).
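One way to picture “combining base rates with other evidence” is Bayes’ rule in odds form, with the base rate serving as the prior. A minimal sketch, with hypothetical numbers:

```python
def posterior(base_rate, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds equal the prior odds
    (taken from the base rate) times the likelihood ratio of the
    additional evidence."""
    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Base rate alone: 20% of comparable events occur.  Suppose the extra
# evidence the subjectivist has seen is 6x as likely if this event occurs.
print(posterior(0.20, 6.0))   # 0.6
```

The frequentist stops at the 20% base rate; the subjectivist, seeing the extra evidence, moves to 60% and can profitably trade against a market still priced at the base rate.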

When the laws of physics actually do “contain uncertainty”, such as when they say that there are multiple different observations you might make next with differing likelihoods (as the Schrödinger equation often will), a subjectivist can combine her propensity-style uncertainty with her personal uncertainty in order to generate her aggregate subjective probabilities. But unlike a propensity theorist, she’s not forced to think that all uncertainty is physical uncertainty: She can act like a propensity theorist with respect to Schrödinger-equation-induced uncertainty, while still believing that her uncertainty about a coin that has already been flipped and slapped against her wrist is in her head, rather than in the coin.

This fully general stance is consistent with the belief that frequentist tools are useful for answering frequentist questions: The fact that you can personally assign probabilities to one-off events (and, e.g., evaluate how good a certain trade is on a prediction market or a stock market) does not mean that tools labeled “Bayesian” are always better than tools labeled “frequentist”. Whatever interpretation of “probability” you use, you’re encouraged to use whatever statistical tool works best for you at any given time, regardless of what “camp” the tool comes from. Don’t let the fact that you think it’s possible to assign probabilities to one-off events prevent you from using useful frequentist tools!

• The idea that “probability” is some preexisting thing that needs to be “interpreted” as something always seemed a little bit backwards to me. Isn’t it more straightforward to say:

1. Beliefs exist, and obey the Kolmogorov axioms (at least, “correct” beliefs do, as formalized by generalizations of logic (Cox’s theorem), or by possible-world-counting). This is what we refer to as “Bayesian probabilities”, and code into AIs when we want them to represent beliefs.

2. Measures over imaginary event classes / ensembles also obey the Kolmogorov axioms. “Frequentist probabilities” fall into this category.

Personally I mostly think about #1 because I’m interested in figuring out what I should believe, not about frequencies in arbitrary ensembles. But the fact is that both of these obey the same “probability” axioms, the Kolmogorov axioms. Denying one or the other because “probability” must be “interpreted” as exclusively either #1 or #2 is simply wrong (but that’s what frequentists effectively do when they loudly shout that you “can’t” apply probability to beliefs).
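The point that #1 and #2 obey the same axioms can be checked mechanically for a finite outcome space, where the Kolmogorov axioms reduce to nonnegativity and normalization (additivity over disjoint unions of atomic outcomes then holds by construction). A sketch, with hypothetical numbers:

```python
def satisfies_kolmogorov(p, tol=1e-9):
    """Check the finite-case Kolmogorov axioms for a dict mapping
    atomic outcomes to probabilities: every probability is
    nonnegative, and the whole outcome space has probability 1.
    Additivity holds by construction once an event's probability is
    defined as the sum over its disjoint atomic outcomes."""
    nonnegative = all(v >= 0 for v in p.values())
    normalized = abs(sum(p.values()) - 1) <= tol
    return nonnegative and normalized

credences = {"heads": 0.5, "tails": 0.5}                  # a degree of belief (#1)
frequencies = {"heads": 507 / 1000, "tails": 493 / 1000}  # an ensemble count (#2)

assert satisfies_kolmogorov(credences)
assert satisfies_kolmogorov(frequencies)
```

Nothing in the check cares whether the numbers came from introspection or from counting an ensemble, which is the commenter’s point.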

Now, sometimes you do need to interpret “probability” as something — in the specific case where someone else makes an utterance containing the word “probability” and you want to figure out what they meant. But the answer there is probably that in many cases people don’t even distinguish between #1 and #2, because they’ll only commit to a specific number when there’s a convenient instance of #2 that makes #1 easy to calculate. For instance, saying 1/6 for a roll of a “fair” die.

People often act as though their utterances about probability refer to #1, though. For instance, when they misinterpret p-values as the post-data probability of the null hypothesis and go around believing that the effect is real...
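That misreading is easy to exhibit by simulation: even when every “significant” result has p < 0.05, the fraction of significant results for which the null is actually true can be far larger than 5%. A sketch, using a simple one-sided z-test and a hypothetical effect size:

```python
import random

def simulate(n_experiments=10000, n=100, effect=0.1, seed=0):
    """Among experiments that reach p < 0.05, how often is the null
    actually true?  The null is true in half the experiments; when it
    is false, the true mean is `effect` instead of 0."""
    rng = random.Random(seed)
    significant = null_and_significant = 0
    for _ in range(n_experiments):
        null_is_true = rng.random() < 0.5
        mu = 0.0 if null_is_true else effect
        sample_mean = sum(rng.gauss(mu, 1) for _ in range(n)) / n
        z = sample_mean * n ** 0.5        # z-statistic for H0: mu = 0
        if z > 1.645:                     # one-sided p < 0.05
            significant += 1
            null_and_significant += null_is_true
    return null_and_significant / significant

# The fraction of "significant" results where the null is true is
# noticeably larger than 0.05 -- so p < 0.05 does not mean P(null) < 0.05.
print(simulate())
```

With a small true effect and hence low power, most of what crosses the significance threshold can still be noise, which is exactly the gap between a p-value and the post-data probability of the null.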

• You might be interested in some work by Glenn Shafer and Vladimir Vovk about replacing measure theory with a game-theoretic approach. They have a website here, and I wrote a lay review of their first book on the subject here.

I have also just now discovered that a new book is due out in May, which presumably captures the last 18 years or so of research on the subject.

This isn’t really a direct response to your post, except insofar as I feel broadly the same way about the Kolmogorov axioms as you do about interpreting their application to phenomena, and this is another way of getting at the same intuitions.

• There’s a Q&A with one of the authors here which explains a little about the purpose of the approach, though it mainly talks about the new book.

• I clicked this because it seemed interesting, but reading the Q&A:

In a typical game we consider, one player offers bets, another decides how to bet, and a third decides the outcome of the bet. We often call the first player Forecaster, the second Skeptic, and the third Reality.

How is this any different from the classical Dutch Book argument, that unless you maintain beliefs as probabilities you will inevitably lose money?

• It’s just a different way of arriving at the same conclusions. The whole project is developing game-theoretic proofs for results in probability and finance.

The pitch is, rather than using a Dutch Book argument as a separate singular argument, they make those intuitions central as a mechanism of proof for all of probability (or at least the core of it, thus far).

• The claim “I think this coin is heads with probability 50%” is an expression of my own ignorance, and 50% probability means that I’d bet at 1 : 1 odds (or better) that the coin came up heads.

Just a minor quibble — using this interpretation to define one’s subjective probabilities is problematic because people are not necessarily indifferent about placing a bet that has an expected value of 0 (e.g. due to loss aversion).

Therefore, I think the following interpretation is more useful: Suppose I win [some reward] if the coin comes up heads. I’d prefer to replace the winning condition with “the ball in a roulette wheel ends up in a red slot” for any roulette wheel in which more than 50% of the slots are red.

(I think I first came across this type of definition in this post by Andrew Critch.)

• The subjective interpretation: Uncertainty is in the mind, not the environment. If I flip a coin and slap it against my wrist, it’s already landed either heads or tails. The fact that I don’t know whether it landed heads or tails is a fact about me, not a fact about the coin. The claim “I think this coin is heads with probability 50%” is an expression of my own ignorance, and 50% probability means that I’d bet at 1 : 1 odds (or better) that the coin came up heads.

Hold on, you’re pulling a fast one here — you’ve substituted the question of “what is the probability that this coin which I have already flipped but haven’t looked at yet has already landed heads” for the question of “what is the probability that this coin which I am about to flip will land heads”!

It is obviously easy to see what the subjective interpretation means in the case of the former question — as you say, the coin is already heads or tails, no matter that I don’t know which it is. But it is not so easy to see how the subjective interpretation makes sense when applied to the latter question — and that is what people generally have difficulty with, when they have trouble accepting subjectivism.

• Doesn’t it mean the same thing in either case? Either way, I don’t know which way the coin will land or has landed, and I have some odds at which I’ll be willing to make a bet. I don’t see the problem.

(Though my willingness to bet at all will generally go down over time in the “already flipped” case, due to the increasing possibility that whoever is offering the bet somehow looked at the coin in the intervening time.)

• The difference is (to the naive view; I don’t necessarily endorse it) that in the case where the coin has landed, I do not know how it landed, but there’s a sense in which I could, in theory, know; there is, in any case, something to know; there is a fact of the matter about how the coin has landed, but I do not know that fact. So the “probability” of it having landed heads, or tails — the uncertainty — is, indeed, entirely in my mind.

But in the case where the coin has yet to be tossed, there is as yet no fact of the matter about whether it’s heads or tails! I don’t know whether it’ll land heads or tails, but nor could I know; there’s nothing to know! (Or do you say the future is predetermined? — asks the naive interlocutor — How else may one talk about probability being merely “in the mind”, for something which has not happened yet?)

Whatever the answers to these questions may be, they are certainly not obvious or simple answers… and that is my objection to the OP: that it attempts to pass off a difficult and confusing conceptual question as a simple and obvious one, thereby failing to do justice to those who find it confusing or difficult.

• the coin is already heads or tails, no matter that I don’t know which it is

• He never said “will land heads”, though. He just said “a flipped coin has a chance of landing heads”, which is not a timeful statement. EDIT: no longer confident that this is the case

Didn’t the post already counter your second paragraph? The subjective interpretation can be a superset of the propensity interpretation.

• Actually, the assignment of probability 1 to an event that has happened is also subjective. You don’t know that it had to occur with complete inevitability, i.e. you don’t know that it had a conditional probability of 1 relative to the preceding state of the universe. You are setting it to 1 because it is a given as far as you are concerned.

• The question is not “what is the probability that the coin would have landed heads”. The question is, “what is the probability that the coin has in fact landed heads”!

• If you are interested in the objective probability of the coin flip, then it only has one value, because it is only one event. In a deterministic universe the objective probability is 1; in a suitably indeterministic universe it is always 0.5.

If you think the questions “what will it be” and “what was it” are different, you are dealing with subjective probability, because the difference the passage of time makes is a difference in the information available to you, the subject.

Failing to distinguish objective and subjective probability leads to confusion. For instance, the Sleeping Beauty paradox is only a paradox if you expect all observers to calculate the same probability despite the different information available to them.

• Frequentist statistics were invented in a (failed) attempt to keep subjectivity out of science in a time before humanity really understood the laws of probability theory

I’m a Bayesian, but do you have a source for this claim? It was my understanding that frequentism was mostly promoted by Ronald Fisher in the 20th century, well after the work of Bayes.

Synthesised from Wikipedia:

While the first cited frequentist work (the weak law of large numbers, 1713, Jacob Bernoulli, Frequentist probability) predates Bayes’ work (edited by Price in 1763, Bayes’ Theorem), it’s not by much. Further, according to the article on “Frequentist probability”, “[Bernoulli] is also credited with some appreciation for subjective probability (prior to and without Bayes theorem).”

The ones that pushed frequentism in order to achieve objectivity were Fisher, Neyman and Pearson. From “Frequentist probability”: “All valued objectivity, so the best interpretation of probability available to them was frequentist”. Fisher did other nasty things, such as using the fact that causality is really hard to soundly establish to argue that tobacco was not proven to cause cancer. But nothing indicates that this was done out of not understanding the laws of probability theory.

AI scientists use the Bayesian interpretation

Sometimes yes, sometimes not. Even Bayesian AI scientists use frequentist statistics pretty often.

This post makes it sound like frequentism is useless, and that is not true. The concepts of a stochastic estimator for a quantity, and of looking at whether it is biased and at its variance, were developed by frequentists to look at real-world data. AI scientists use them to analyse algorithms like gradient descent, or approximate Bayesian inference schemes, and the tools are definitely useful.
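As a concrete instance of those frequentist concepts, consider the classic pair of variance estimators: dividing the sum of squared deviations by n gives a biased estimator of the true variance (its average over repeated samples is (n-1)/n times the truth), while dividing by n-1 is unbiased. Both notions are defined by averaging over repeated samples, in exactly the frequentist way. A sketch:

```python
import random

def variance_estimators(n=5, true_var=4.0, trials=100000, seed=0):
    """Average the 1/n and 1/(n-1) variance estimators over many
    repeated samples from a distribution with known variance."""
    rng = random.Random(seed)
    sd = true_var ** 0.5
    biased_total = unbiased_total = 0.0
    for _ in range(trials):
        xs = [rng.gauss(0.0, sd) for _ in range(n)]
        mean = sum(xs) / n
        ss = sum((x - mean) ** 2 for x in xs)
        biased_total += ss / n            # biased: average is (n-1)/n * true_var
        unbiased_total += ss / (n - 1)    # unbiased: average is true_var
    return biased_total / trials, unbiased_total / trials

biased, unbiased = variance_estimators()
print(biased, unbiased)   # near 3.2 and 4.0 respectively
```

The same bias/variance bookkeeping is what gets applied to, say, stochastic gradient estimates; the interpretation debate doesn’t make these tools any less useful.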

• There are also two schools of Bayesian thinking: “It is popular to divide Bayesians into two main categories, ‘objective’ and ‘subjective’ Bayesians. The divide is sometimes made formal; there are conferences labelled as one but not the other, for example.

A caricature of subjective Bayes is that all probabilities are just opinion, and the best we can do with an opinion is make sure it isn’t self-contradictory, and satisfying the rules of probability is a way of ensuring that. A caricature of objective Bayes is that there exists a correct probability for every hypothesis given certain information, and that different people with the same information should make exactly the same probability judgments.”

• The subjectivist interpretation of probability is common [in] … common intuition (where people feel like they can say there’s a 30% chance of rain tomorrow without worrying about the fact that tomorrow only happens once)

Why do you say that “there’s a 30% chance of rain tomorrow” is an example of the subjective interpretation? Isn’t it just as readily interpreted as saying “on 30% of all days similar to this one [in meteorological conditions, etc.], it rains”?

Besides, “this coin flip that I am going to do right now” only happens once, too (any subsequent coin flips will be other, different, coin flips, and not that specific coin flip). Surely you don’t conclude from this that when someone says “this coin has a 50% chance of coming up heads”, it means they’re taking the subjectivist view of the coin’s behavior?

• There are 0 other days “similar to” this one in Earth’s history, if “similar to” is strict enough (e.g. the exact pattern of temperature over time, cloud patterns, etc.). You’d need a precise, more permissive definition of “similar to” for the statement to be meaningful.

• But the same is true of coin flips.

• When you say “all days similar to this one”, are you talking about all real days or all possible days? If it’s “all possible days”, then this seems like summing over the measures of all possible worlds compatible with both your experiences and the hypothesis, and dividing by the sum of the measures of all possible worlds compatible with your experiences. (Under this interpretation, jessicata’s response doesn’t make much sense; “similar to” means “observationally equivalent for observers with as much information as I have”, and doesn’t have a free variable.)

• I don’t think using likelihoods when publishing in journals is tractable.

1. Where did your priors come from? What if other scientists have different priors? Justifying the chosen prior seems difficult.

2. Where did your likelihood ratios come from? What if other scientists disagree?

P-values may have been a failed attempt at objectivity, but they’re a better attempt than moving towards subjective probabilities (even though the latter is more correct).
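The mechanics behind the proposal being criticized — report the likelihood ratio of the data and let each reader supply their own prior — can be sketched, which also shows objection 1 in action: the same reported number yields different posteriors for readers with different priors. (The numbers here are hypothetical.)

```python
def apply_likelihood_ratio(prior, likelihood_ratio):
    """Each reader combines the paper's reported likelihood ratio
    with their own prior, via Bayes' rule in odds form."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

reported_lr = 9.0   # the single number a paper might report (hypothetical)

# Three readers with different priors read the same paper:
for prior in (0.1, 0.5, 0.9):
    print(prior, "->", round(apply_likelihood_ratio(prior, reported_lr), 3))
```

On this scheme the disagreement is confined to the priors; whether that confinement makes publishing likelihoods tractable is exactly what the comment disputes.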