OK. I think we may agree on the technical points. The issue may be with the use of the word “Bayesian”.
Me: But they aren’t guaranteed to eventually get a Bayesian to think the null hypothesis is likely to be false, when it is actually true.
You: Importantly, this is false! This statement is wrong if you have only one hypothesis rather than two.
I’m correct, by the usual definition of a “Bayesian” as someone who does inference by combining likelihood and prior. Bayesians always have more than one hypothesis (outside trivial situations where everything is known with certainty), with priors over them. In the example I gave, one can find a b such that the likelihood ratio of b relative to 0.5 is large, but the set of such b values will likely have low prior probability, so the Bayesian probably isn’t fooled. In contrast, a frequentist “pure significance test” does involve only one explicit hypothesis, though the choice of test statistic must in practice embody some implicit notion of what the alternative might be.
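To make this concrete, here is a minimal sketch of the coin-flip version of the point (the specific numbers, 60 heads in 100 flips, are hypothetical, chosen just for illustration): the likelihood ratio at the cherry-picked maximum-likelihood b can look like notable evidence against 0.5, while a Bayesian who puts prior probability 0.5 on the null and a uniform prior on b under the alternative barely shifts at all, because the prior mass near that particular b is small.

```python
import math

def log_lik(b, k, n):
    """Log-likelihood of k heads in n flips under Bernoulli(b)."""
    return k * math.log(b) + (n - k) * math.log(1 - b)

n, k = 100, 60          # hypothetical data: 60 heads in 100 flips
b_hat = k / n           # the cherry-picked b maximizing the likelihood ratio

# Likelihood ratio of b_hat against the null value 0.5 -- about 7.5 here,
# which looks like substantial evidence if reported on its own.
lr = math.exp(log_lik(b_hat, k, n) - log_lik(0.5, k, n))

# Bayesian with P(null) = 0.5 and b ~ Uniform(0, 1) under the alternative.
# The marginal likelihood under the alternative integrates out b, giving the
# Beta function B(k+1, n-k+1) = k! (n-k)! / (n+1)!.
log_marg_alt = math.lgamma(k + 1) + math.lgamma(n - k + 1) - math.lgamma(n + 2)
log_marg_null = n * math.log(0.5)

post_odds_null = math.exp(log_marg_null - log_marg_alt)
post_prob_null = post_odds_null / (1 + post_odds_null)

print(f"LR at b_hat: {lr:.1f}, P(null | data): {post_prob_null:.2f}")
```

With these numbers the posterior probability of the null stays slightly above 0.5, even though the likelihood ratio at the best-fitting b is around 7.5: averaging over the prior on b, rather than evaluating at the single best b, is exactly what keeps the Bayesian from being fooled.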
Beyond this, I’m not really interested in debating to what extent Yudkowsky did or did not understand all nuances of this problem.
A platonically perfect Bayesian given complete information and with accurate priors cannot be substantially fooled. But once again this is true regardless of whether I report p-values or likelihood ratios. p-values are fine.