It’s valuable to know what can happen under adversarial assumptions even if you don’t expect those assumptions to hold.
That sounds right; the question is the extent of that value, and what it means for doing epistemology and decision theory and so on.
This isn’t strong evidence; you’re mixing up P(is successful | makes good probability estimates) with P(makes good probability estimates | is successful).
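To make the distinction concrete, here is a minimal numeric sketch of why the two conditionals can come apart. All the numbers are hypothetical, chosen only to illustrate Bayes' theorem; nothing here is a claim about actual base rates.

```python
# Hypothetical numbers, for illustration only.
p_good = 0.10             # P(makes good probability estimates) in the population
p_succ_given_good = 0.30  # P(is successful | makes good estimates)
p_succ_given_bad = 0.20   # P(is successful | makes poor estimates)

# Total probability of success across both groups
p_succ = p_good * p_succ_given_good + (1 - p_good) * p_succ_given_bad

# Bayes' theorem: P(makes good estimates | is successful)
p_good_given_succ = p_good * p_succ_given_good / p_succ

print(round(p_succ_given_good, 3))  # 0.3
print(round(p_good_given_succ, 3))  # 0.143
```

Even when good estimators are somewhat more likely to succeed, most successful people in this toy setup are not good estimators, because good estimators are rare in the base population.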
Tweaked the wording, is that better? (“Compatible” was a weasel word anyway.)
Therefore, the relationship between the ability to make accurate probability estimates and success in fields that don’t specifically require them seems weak.
I would still dispute this claim. My guess of how most fields work is that successful people in those fields have good System 1 intuitions about how their fields work and can make good intuitive probability estimates about various things even if they don’t explicitly use Bayes. Many experiments purporting to show that humans are bad at probability may be trying to force humans to solve problems in a format that System 1 didn’t evolve to cope with. See, for example, Cosmides and Tooby 1996.
Thanks. I was not familiar with that hypothesis; I’ll have to look at C&T’s paper.