“when, if ever, our credences ought to capture indeterminacy in how we weigh up considerations/evidence”
The obvious answer is: only when there is enough indeterminacy to matter; I’m not sure anyone would disagree. The question isn’t whether there is indeterminacy, it’s how much, and whether capturing it is worth the costs of using a more complex model instead of doing it the Bayesian way.
I’d be surprised if many/most infra-Bayesians would endorse suspending judgment in the motivating example in this post
You also didn’t quite endorse suspending judgement in that case—“If someone forced you to give a best guess one way or the other, you suppose you’d say “decrease”. Yet, this feels so arbitrary that you can’t help but wonder whether you really need to give a best guess at all…” So, yes, if it’s not directly decision-relevant, don’t pick; say you’re uncertain. That’s best practice even if you use precise probabilities—you can have a preference for robust decisions, or a rule for withholding judgement when your confidence is low. But if it is decision-relevant, and only a binary choice is available, your best guess matters. And this is exactly why Eliezer says that when there is a decision, you need to focus your indeterminacy, and why he was dismissive of Dempster–Shafer (DS) and similar approaches.
The obvious answer is: only when there is enough indeterminacy to matter; I’m not sure anyone would disagree. The question isn’t whether there is indeterminacy, it’s how much, and whether capturing it is worth the costs of using a more complex model instead of doing it the Bayesian way.
Based on this I think you probably mean something different by “indeterminacy” than I do (and I’m not sure what you mean). Many people in this community explicitly disagree with the claim that our beliefs should be indeterminate at all, as exemplified by the objections I respond to in the post.
When you say “whether it’s worth the costs of using a more complex model instead of doing it the Bayesian way”, I don’t know what “costs” you mean, or what non-question-begging standard you’re using to judge whether “doing it the Bayesian way” would be better. As I write in the “Background” section: “And it’s question-begging to claim that certain beliefs “outperform” others, if we define performance as leading to behavior that maximizes expected utility under those beliefs. For example, it’s often claimed that we make “better decisions” with determinate beliefs. But on any way of making this claim precise (in context) that I’m aware of, “better decisions” presupposes determinate beliefs!”
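To make that last point concrete, here is a minimal sketch (the numbers and act names are invented for illustration, not from the post) of how “the expected-utility-maximizing act” can simply be undefined under an indeterminate credence: different members of a credal set rank the acts differently, so there is no single “better decision” to appeal to without first privileging a determinate probability.

```python
def expected_utility(p, u_if_true, u_if_false):
    """Expected utility of an act, given probability p that the event occurs."""
    return p * u_if_true + (1 - p) * u_if_false

# An indeterminate credence, represented as a (discretized) credal set:
# every probability in [0.4, 0.6] is treated as permissible.
credal_set = [0.4, 0.45, 0.5, 0.55, 0.6]

# Two hypothetical acts (utilities made up for illustration).
act_a = dict(u_if_true=10, u_if_false=-5)   # pays off if the event occurs
act_b = dict(u_if_true=-5, u_if_false=10)   # pays off if it doesn't

# Under some members of the credal set, A beats B; under others, B beats A.
a_better = [p for p in credal_set
            if expected_utility(p, **act_a) > expected_utility(p, **act_b)]
b_better = [p for p in credal_set
            if expected_utility(p, **act_b) > expected_utility(p, **act_a)]

print(a_better)  # -> [0.55, 0.6]
print(b_better)  # -> [0.4, 0.45]
```

Neither act maximizes expected utility across the whole credal set, so a standard of “better decisions” defined as expected-utility maximization can’t adjudicate between them without presupposing a determinate credence—which is the question-begging move at issue.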
You also didn’t quite endorse suspending judgement in that case—“If someone forced you to give a best guess one way or the other, you suppose you’d say “decrease”.
The quoted sentence is consistent with endorsing suspending judgment, epistemically speaking. As the key takeaways list says, “If you’d prefer to go with a given estimate as your “best guess” when forced to give a determinate answer, that doesn’t imply this estimate should be your actual belief.”
But if it is decision relevant, and there is only a binary choice available, your best guess matters
I address this in the “Practical hallmarks” section — what part of my argument there do you disagree with?