Probably the most damning criticism you’ll find, curi, is that fallibilism isn’t useful to the Bayesian.
The fundamental disagreement here is somewhere in the following statement:
“There exist true things, and we have a means of determining how likely it is for any given statement to be true. Furthermore, a statement that has a high likelihood of being true should be believed over a similar statement with a lower likelihood of being true.”
I suspect your disagreement is in one of several places.
1) You disagree that there even exist epistemically “true” facts.
2) You disagree that we can determine how likely something is to be true.
or
3) You disagree that likelihood of being true (as defined by us) is reason to believe the truth of something.
I can actually flesh out your objections to all of these things.
For 1, you could probably successfully argue that we aren’t capable of determining whether we’ve ever actually arrived at a true epistemic statement, because real certainty doesn’t exist; thus the existence or nonexistence of true epistemic statements is on the same epistemological footing as the existence of God, i.e. shaky to the point of not concerning oneself with them altogether.
2 basically ties in with the above directly.
3 is a whole ’nother ball game, and I don’t think it’s really been broached yet by anyone, but it’s certainly a valid point of contention. I’ll leave it out unless you’d like to pursue it.
The Bayesian counter to all of these is simply, “That doesn’t really do anything for me.”
Declaring that we have certainty, and quantifying it as best we can, is incredibly useful. I can pick up an apple and let go. It will fall to the ground. I have an enormous amount of certainty in my ability to repeat that experiment.
That I cannot foresee the philosophical paradigm that will uproot my hypothesis that dropped apples fall to the ground is not a very good reason to reject my relative certainty in the soundness of that hypothesis. Such an apples-aren’t-falling-when-dropped paradigm would literally (and necessarily) uproot everything else we know about the world.
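To make the “quantifying it as best we can” point concrete, here is a minimal sketch of how a Bayesian might put a number on that certainty about dropped apples. The Beta-Bernoulli model, the uniform prior, and the trial count are all illustrative assumptions of mine, not anything argued above:

```python
# Sketch: quantifying "certainty" that a dropped apple falls, using a
# Beta-Bernoulli update. Start from a Beta(1, 1) (uniform) prior over
# P(apple falls when dropped) and update on observed drops.

def posterior_mean(successes, failures, prior_a=1.0, prior_b=1.0):
    """Posterior mean of P(apple falls) after observing the given drops."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

# Before any evidence, the model is maximally uncertain:
print(posterior_mean(0, 0))        # 0.5

# After 1,000 drops, every one of which fell:
print(posterior_mean(1000, 0))     # ~0.999
```

The point of the sketch is only that the certainty is graded and revisable: it climbs toward 1 with evidence but never reaches it, which matches the “so ridiculously unlikely that I’m wrong” sense of certainty rather than the “literally impossible for me to be wrong” sense.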
Basically, what I’m trying to say is that all you’re ever going to get out of a Bayesian is, “No, I disagree. I think we can have certainty.” And the only way you could disprove conclusions made by Bayesians is through means the Bayesian would have already seen, and thus the Bayesian would have already rejected said conclusion.
You’ve already outlined that the fallibilist will just keep tweaking explanations until an explanation with no criticism is reached. You might find Bayesianism more palatable if you pretend that we aren’t trying to find certainty, but rather trying to minimize criticism.
This probably hasn’t been a very satisfying answer. I certainly agree it’s useful to have an understanding of the biases to our certainties. I also think Bayesianism happens to build that into itself quite well. Personally, I don’t think there’s anything I’m absolutely certain about, because to claim so would be silly.
Small nitpick: I don’t like your use of the word ‘certainty’ here. Especially in philosophy, it has too much of a connotation of “literally impossible for me to be wrong” rather than “so ridiculously unlikely that I’m wrong that we can just ignore it”, which may cause confusion.
Where don’t you like it? I don’t think anyone actually argues for your first definition, because, like I said, it’s silly. I think curi’s point is that fallibilism is predicated on your second definition not (ever?) being a valid claim.
My point is that the things we are “certain” about (as per your second definition) probably coincide almost exactly with “statements without criticism” as per curi’s definition(s).
It is a silly definition, but people are silly enough that I hear it often enough to be wary of it.
My point is that the things we are “certain” about (as per your second definition) probably coincide almost exactly with “statements without criticism” as per curi’s definition(s).
I interpreted this as the first definition. I guess we should see what curi says.
People generally try to have their cake and eat it: they want certainty to mean “cannot be wrong”, but only on the basis that they feel sure.