Bayesianism defines probability in terms of belief. Frequentism defines probability as a statement about the world’s true probability. Saying “[t]he Frequentist now believes” is therefore asking for a Frequentist’s Bayesian probability.

Right, okay. I am trying to learn your ontology here, but there’s a sizable inferential distance between these concepts and where I currently stand. I don’t understand what the 95% means. I don’t understand why the d100 has a 99% chance to be fixed after one roll, while a d10 only has 90%. By the second roll I think I can start to stomach the logic here though, so maybe we can set that aside.
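(For what it’s worth, when I try to reconstruct those numbers myself, the only reading I can make work is a 50/50 prior over “fair” vs. “perfectly loaded toward the observed face,” updated on a single roll. This is my guess at the model, not your actual claim:)

```python
from fractions import Fraction

def posterior_fixed(sides, prior_fixed=Fraction(1, 2)):
    """Posterior that a die is loaded after one roll, assuming a
    loaded die always shows the face we happened to observe."""
    # P(observed face | loaded) = 1; P(observed face | fair) = 1/sides
    like_loaded = Fraction(1)
    like_fair = Fraction(1, sides)
    num = like_loaded * prior_fixed
    return num / (num + like_fair * (1 - prior_fixed))

print(float(posterior_fixed(100)))  # 100/101 ~ 0.990, close to the 99% figure
print(float(posterior_fixed(10)))   # 10/11  ~ 0.909, close to the 90% figure
```

If that reconstruction is right, the d100 beats the d10 simply because a single fair d100 roll is ten times less likely to land on any given face, so one roll is stronger evidence.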

In my terms, when you say that a Bayesian wouldn’t bet $1bil:$1 that the sun will rise tomorrow, that doesn’t seem correct to me. It’s true that I wouldn’t actually make that nightly bet, because the risk-free rate is like 3% per annum so it’d be a pretty terrible allocation of risk, plus it seems like it’d be an assassination market on the rotation of Earth and I don’t like incentivizing that as a matter of course. But does the math of likelihood ratios not work just as well to bury bad theories under a mountain of evidence?
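Concretely, the bookkeeping I have in mind is just summing log likelihood ratios; the two hypotheses and their numbers here are invented purely for illustration:

```python
import math

# Toy hypotheses (numbers are illustrative, not a model of the actual sun):
# H1: sun rises each morning with probability 1 - 1e-6
# H2: sun rises each morning with probability 0.5
p_rise_h1 = 1 - 1e-6
p_rise_h2 = 0.5

log_odds = 0.0  # start from even prior odds, H1:H2
for morning in range(100):
    # each observed sunrise adds log(P(rise|H1) / P(rise|H2)) to the odds
    log_odds += math.log(p_rise_h1 / p_rise_h2)

print(log_odds)  # ~69.3 nats after 100 sunrises; H2 is thoroughly buried
```

The point is just that the burying is automatic: each confirming morning adds the same increment, and a hundred mornings leaves the bad theory at astronomically long odds.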

I think the refusal to assign a 1e-40 chance to an event is an epistemological choice separate from Bayesianism. The math itself is quite capable of leading to that conclusion, and of recovering from that state quickly enough.

I think maybe the crux is “There is no way for a Bayesian to be wrong. Everything is just an update. But a Frequentist who said the die was fair can be proven wrong to arbitrary precision.” Yet if the Bayesian announces their prior, you can know precisely how much of your arbitrarily strong evidence they will require before they believe the die is loaded.
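That predictability is easy to make concrete. Suppose the Bayesian announces a prior of 1e-6 on “loaded” and each suspicious roll carries a 10:1 likelihood ratio (both numbers invented for this sketch); then the number of rolls needed to push them past 50% is fixed in advance:

```python
import math

prior_loaded = 1e-6   # announced prior on "loaded" (invented for illustration)
lr_per_roll = 10.0    # likelihood ratio per suspicious roll (also invented)

prior_odds = prior_loaded / (1 - prior_loaded)
# We need posterior odds > 1, i.e. prior_odds * lr_per_roll**n > 1,
# so solve for the smallest integer n:
n = math.ceil(-math.log(prior_odds) / math.log(lr_per_roll))
print(n)  # 6 rolls, given these made-up numbers
```

So the Bayesian is falsifiable in the sense that matters to me: their commitments pin down exactly what evidence would move them, and by how much.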

Again, I hope this is taken in the spirit I mean it, which is “you are the only self-proclaimed Frequentist on this board I know of, so you are a very valuable source of epistemic variation that I should learn how to model”.
