Forty rolls gives you 0.9999999999999999999999999999999999999999, a confidence so high it is forbidden to a Bayesian under any circumstances.
Why? You have explicitly assumed a prior distribution:
You have two competing hypotheses:

1. The ten-sided die is fair. It produces the numbers {1,2,3,4,5,6,7,8,9,10} with equal probability.
2. The ten-sided die is weighted. It always produces the number 10.
Actually, you have not specified what probabilities the Bayesian is assigning to these two. Suppose for the moment that it is 50-50.
It is straightforward to combine this with the observed die rolls according to the Bayesian calculation. If the Bayesian is not willing to bet at the resulting odds, that only means that that was not his prior. Bayes’ Theorem does not depend on anyone’s actual belief about anything.
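A minimal sketch of that calculation, assuming the 50-50 prior suggested above and forty observed rolls of 10, using exact rational arithmetic so nothing is lost to floating point:

```python
from fractions import Fraction

# Assumed 50-50 prior over the two hypotheses (not specified in the original).
prior_fair = Fraction(1, 2)
prior_weighted = Fraction(1, 2)

# Likelihood of observing forty 10s in a row under each hypothesis.
rolls = 40
like_fair = Fraction(1, 10) ** rolls  # fair die: each 10 has probability 1/10
like_weighted = Fraction(1, 1)        # weighted die: always produces 10

# Bayes' Theorem: posterior is proportional to prior times likelihood.
post_weighted = (prior_weighted * like_weighted) / (
    prior_weighted * like_weighted + prior_fair * like_fair
)

print(post_weighted)      # exactly 10^40 / (10^40 + 1)
print(1 - post_weighted)  # exactly 1 / (10^40 + 1), i.e. below 10^-40
```

The posterior probability that the die is weighted comes out to 10^40 / (10^40 + 1), which is 0.9999... with forty nines: the figure quoted above. Nothing forbids this to a Bayesian; it follows mechanically from the stated prior and the data.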
You omit to say how the Frequentist “bur[ies] wrong beliefs under a mountain of data”. Presumably you believe that the data in this example has done so. But what is the Frequentist’s argument? “Lookit this mountain of data!”? Actual papers doing statistical reasoning never have such mountains, especially in reproducibility-crisis fields.