I have to admit I don’t get it. I mean, you can’t just deny that probability estimates are a thing. How do decision theories (or just decision mechanisms) work in a Fallibilist worldview? What does it mean, technically, for a theory to become “less wrong” over time? What are the mechanics (what changes in one’s worldview) when we notice and eliminate an error in a theory?
Your description of infinite possibilities makes me think you don’t understand the difference between “infinite” and “very large and not fully known”. And I wonder if you acknowledge that one’s potential future experiences are NOT infinite, but are still very hard to predict and unknown in scope, and that Bayesian probabilities work just fine for this: you simply include an assignment for “something else”. Bayesian probabilities aren’t claims of truth; they’re personal estimates/assignments about future experiences. And they’re the best thing we have for making decisions.
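To make that concrete, here is a minimal sketch (my own hypothetical example, not anything from your post) of what I mean by assigning credences over a handful of anticipated experiences plus a catch-all “something else” bucket, and updating them on evidence:

```python
# Hypothetical illustration: Bayesian credences over a finite set of
# anticipated outcomes, with a catch-all "something else" bucket reserving
# probability mass for possibilities we haven't enumerated.

def bayes_update(priors, likelihoods):
    """Return posterior credences given priors and P(evidence | hypothesis)."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Prior credences sum to 1; some mass is explicitly set aside for "something else".
priors = {"rain": 0.3, "clear": 0.6, "something else": 0.1}

# How strongly the evidence (say, dark clouds) is expected under each hypothesis.
# The catch-all gets a middling likelihood since its contents are unspecified.
likelihoods = {"rain": 0.9, "clear": 0.2, "something else": 0.5}

posterior = bayes_update(priors, likelihoods)
print(posterior)  # credence in "rain" rises, "clear" falls, "something else" shifts modestly
```

The numbers are made up; the point is only that “unknown in scope” doesn’t block assigning and updating estimates, since the residual bucket absorbs whatever you haven’t thought of.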