There are two common types of strawman arguments that I’ve encountered in this debate.
One is the strawman argument that Bayesians typically give against frequentists: they show how a particular frequentist test gives the wrong answer on a particular problem, while a straightforward application of Bayes’ theorem gives the right answer. Frequentists easily counter that a wiser frequentist would have used a different test, one that gives the right answer for this problem.
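(As a hedged aside, not an example from the exchange itself: the flavor of “straightforward application of Bayes’ theorem” at issue is the kind of base-rate calculation below. All the numbers are assumed purely for illustration.)

```python
# Illustrative base-rate sketch (all numbers assumed, not from this thread).
# A test with 90% sensitivity and a 5% false-positive rate sounds accurate,
# yet for a rare condition most positives are still false positives.

prior = 0.01          # assumed base rate: P(condition)
sensitivity = 0.90    # P(positive | condition)
false_pos = 0.05      # P(positive | no condition)

# Bayes' theorem: P(condition | positive) = P(pos | cond) * P(cond) / P(pos)
p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive

print(round(posterior, 3))  # ~0.154: the posterior stays low despite a "positive"
```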
The other strawman argument is the one anti-Bayesians make: they chastise Bayesians for claiming to have the complete theory of rationality / epistemology, with no more work left to be done. This is obviously false, since no Bayesian has ever claimed it, not even Jaynes. A complete theory would need ways to represent hypotheses, and ways to generate them, and the axioms of probability make no additional assumptions about what a hypothesis is.
I’m still looking for a well-posed inference problem where a straightforward application of Bayesian principles gives the wrong answer, but a straightforward application of a different set of principles gives the right answer.
This seems a bit motte-and-bailey. In your post, you argue for Bayesianism as a theory of reasoning. Of course you can then say that any problem you can’t solve well with Bayesianism isn’t a well-posed inference problem. Unfortunately, nature doesn’t care about posing well-posed inference problems.
Even if Bayesianism is better for a small subset of reasoning problems, that doesn’t imply that it’s good to reject tool-boxism.
Yep. If Bayes only does one thing, you need other tools to do the other jobs. Which, by the way, implies nothing about converging, or not, on truth.
“Bayesian” has more than one meaning.
What you have there is a defence of the Jaynesian variety, but Yudkowsky is making much stronger claims. For instance, he thinks Bayes can replace science, but you can’t replace science with inference alone.
Also, if Bayes is inference alone, it can’t be the sole basis of intelligence.