I don’t think this is a fallacy. If it were, one of the most powerful and most common informal inference forms (IBE, a.k.a. Inference to the Best Explanation, or abduction) would be inadmissible. That would be absurd. Let me elaborate.
IBE works by listing all the potential explanations that come to mind, subjectively judging how good each is (by explanatory virtues like simplicity, fit, internal coherence, external coherence, unification, etc.), and then inferring that the best explanation is probably correct. This involves the assumption that the true explanation is probably among those considered. Sometimes this assumption seems unreasonable, in which case IBE shouldn’t be applied; that is mostly the case when all the considered explanations seem bad.
However, in many cases the “grain of truth” assumption (the true explanation is within the set of considered explanations) seems plausible. For example, I observe the door isn’t locked. By far the best (least contrived) explanation I can think of seems to be that I forgot to lock it. But of course there is a near infinitude of explanations I didn’t think of, so who is to say there isn’t an unknown explanation which is even better than the one about my forgetfulness? Well, it just seems unlikely that there is such an explanation.
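To make the procedure a bit more concrete, here is a minimal sketch of how one might turn such virtue judgments into rough numbers, with an explicit slot for the possibility that the true explanation wasn’t considered. The scoring scheme and all the numbers are illustrative assumptions of mine, not anything IBE itself prescribes:

```python
# Toy sketch (my own illustration, not a formal algorithm): score the
# considered explanations by their explanatory virtues and reserve a small
# probability for "the true explanation is one I didn't think of".

def ibe(explanations, virtue_scores, catch_all_mass=0.05):
    """Turn rough virtue scores into rough probabilities.

    explanations   -- labels for the candidate explanations
    virtue_scores  -- one nonnegative score per explanation, summarizing
                      simplicity, fit, coherence, unification, etc.
    catch_all_mass -- probability reserved for unconsidered explanations;
                      the "grain of truth" assumption is that this is small.
    """
    total = sum(virtue_scores)
    probs = {
        label: (1 - catch_all_mass) * score / total
        for label, score in zip(explanations, virtue_scores)
    }
    probs["<some explanation I didn't think of>"] = catch_all_mass
    return probs


# The unlocked door: "I forgot to lock it" is far less contrived than the
# alternatives I can come up with, so it gets most of the virtue score.
print(ibe(["I forgot to lock it", "someone picked the lock", "the lock broke"],
          [8.0, 0.5, 0.5]))
# -> "I forgot to lock it": ~0.84; each alternative: ~0.05; catch-all: 0.05
```

The point of the sketch is only that the “grain of truth” assumption shows up as an explicit, arguable quantity (the catch-all mass), not as a hidden step.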
And IBE isn’t just applicable to common everyday explanations. For example, the most common philosophical justification for the existence of the external world is an IBE: the best explanation for my experience of a table in front of me seems to be that there really is a table in front of me (which interacts with light, which hits my eyes, which I probably also have, etc.).
Of course, in other cases, applications of IBE might be more controversial. However, in practice, if Alice makes an argument based on IBE and Bob disagrees with its conclusion, this is commonly because Bob thinks Alice made a mistake when judging which of the explanations she considered is the best. In that case Bob can present reasons suggesting that, actually, explanation x is better than explanation y, contrary to what Alice assumed. Alice might be convinced by these reasons, or not, in which case she can give her reasons for still believing that y is better than x, and so on.
In short, in many or even most cases where someone disagrees with a particular application of IBE, their issue is not with IBE itself but with what the best explanation is. This suggests the “grain of truth” assumption is often reasonable.
> Most examples of bad reasoning, that are common amongst smart people, are almost good reasoning. Listing out all the ways something could happen is good, if and only if you actually list out all the ways something could happen
Well, that’s clearly impossible in almost all cases (there is a practically unlimited number of possible explanations for almost anything), so we can’t make an exhaustive list. Moreover, “should” implies “can”, so, by contraposition, if we can’t list them all, it’s not the case that we should list them all.
> , or at least manage to grapple with most of the probability mass.
But that’s backwards. IBE is a method that assigns a probability to the best explanation based on how good it is (in terms of explanatory virtues) and on its being better than the other considered explanations. So IBE is a specific method for coming up with probabilities; it’s not just stating your prior. You can’t argue about purely subjective priors (that would be like arguing about taste), but you can make arguments about what makes some particular explanation good, or bad, or better than others. And if you happen to think that the “grain of truth” assumption is not plausible for a particular argument, you can state that, too. (Though the fact that this is rather rarely done in practice suggests it’s generally not such a bad assumption to make.)
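For those who prefer a Bayesian gloss (this is my own paraphrase, not something the argument above depends on): write H_1, …, H_n for the considered explanations, H_0 for the catch-all hypothesis “the true explanation is none of the above”, and E for the evidence. Then:

```latex
% H_1,\dots,H_n: considered explanations; H_0: catch-all hypothesis
% "the true explanation is none of the above"; E: the evidence.
P(H_i \mid E) \;=\; \frac{P(E \mid H_i)\, P(H_i)}{\sum_{j=0}^{n} P(E \mid H_j)\, P(H_j)},
\qquad i = 1, \dots, n.
```

On this reading, the explanatory virtues act as informal proxies for the likelihoods and priors, and the “grain of truth” assumption is just the claim that P(H_0 | E) is small, so that the best considered explanation can carry most of the posterior mass.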