Simplicity and consistency are then factors that positively affect the probability of a belief being true, but they are not in themselves determinants of the “best” explanation.
If that is so, you no longer have an argument that abduction is merely a form of induction, because you have admitted to two sources of probability other than induction.
The underlying issue is that what we are trying to do with abduction is find the hidden mechanism behind the directly observable, the force of gravity that makes the apple fall. Since induction is limited to inferring future observations from past ones, it is limited to the observable and silent about behind-the-scenes mechanisms. And so it is limited compared to abduction, and so abduction is not a form of induction. (The classic argument against induction is that it is not a form of deduction...within classical logic. It could still be a form of probabilistic reasoning.)
Bayes allows you to confirm hypotheses that would generate the observable evidence, but doesn’t mechanically generate them for you, and also doesn’t allow you to distinguish equally predictive ones. You can solve the first problem by creatively positing hypotheses, and the second with the criteria of simplicity and consistency. That gives you full abductive reasoning. Bayes is a subset of full abductive reasoning.
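A toy sketch of both limitations (made-up numbers, nothing from any actual model): a Bayesian update over a fixed hypothesis set never adds a hypothesis that wasn’t already in the set, and it leaves equally predictive hypotheses exactly tied.

```python
# Two hypotheses that predict the evidence equally well (hypothetical numbers).
priors = {"H1": 0.5, "H2": 0.5}
likelihoods = {"H1": 0.8, "H2": 0.8}  # P(evidence | H) is the same for both

# Bayes' rule: posterior is proportional to prior * likelihood.
unnorm = {h: priors[h] * likelihoods[h] for h in priors}
z = sum(unnorm.values())
posteriors = {h: u / z for h, u in unnorm.items()}

print(posteriors)  # {'H1': 0.5, 'H2': 0.5} -- the tie survives the update
```

The update rule has no term that could break the tie; something outside the update itself (simplicity, consistency) has to do it.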
Science uses abductive reasoning, plus (dis)confirmation. Deduction is needed for these, because you need to deduce the expected consequences of a theory in order to observe them.
Assuming the probabilities are correct, I think Belief A should be considered the “best”. Would you agree with this?
I would need to know where you are getting your likelihoods from. Are they hard observational data, or subjective priors?
Quantum field theory is an example of a theory that is very complex and inconsistent with other accepted theories (e.g. General Relativity), but still the “best” explanation for the empirical evidence.
Simplicity is a relative measure, not an absolute. QFT can be both a complex theory and the simplest one that does the job.
This reasoning is okay in our world, because we have the prior that among diseases that have these three symptoms, flu is most frequent. But this is extra information that is not included in the quote.
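To make that extra information explicit (illustrative numbers only, with flu against a hypothetical rare disease that explains the symptoms equally well): the symptoms alone don’t pick out flu; the base rates do the work.

```python
# Hypothetical base rates, plus P(these three symptoms | disease).
priors = {"flu": 0.10, "rare_disease": 0.001}
likelihoods = {"flu": 0.9, "rare_disease": 0.9}  # equally good explanations

unnorm = {d: priors[d] * likelihoods[d] for d in priors}
z = sum(unnorm.values())
posteriors = {d: round(u / z, 3) for d, u in unnorm.items()}
print(posteriors)  # flu wins purely on its prior frequency
```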
Yes, we need to know where the likelihood is coming from.
Probabilistic reasoning is also induction, so long as induction isn’t required to be certain: observing multiple instances of something in the past raises the probability that it will occur again in the future.
In this case, my own (not at all formal, just my sense) understanding is that while induction and deduction are largely about interpreting and reasoning regarding facts/data, abduction is largely about applying, proposing, and evaluating models and heuristics.
Not really. They have standard definitions you can just look up. You don’t have to guess.
Bayes factors come from metaphysical assumptions and concept groupings
Well, Bayesian updates are no good unless you have the right hypothesis. That might be what you mean.
I find it useful to think in terms of failure modes: inductive reasoning is the sort of reasoning that tends to fail because you overestimated the strength of the existing evidence; abductive reasoning is the sort of reasoning that tends to fail because you didn’t evaluate the direction of the existing evidence well (maybe you over-weighted something, maybe you’re missing a possible hypothesis and hence a direction in possibility space, etc.).
It’s naive to assume that explanations are suggested by the data. They are conjectured.
I appreciate that point, yes, and I have looked up standard definitions. I’m probably not looking in the right places, though, because the ones I have found are either too vague and imprecise for me to make sense of, or focus on generating hypotheses/explanations/models. If you do have a good source for a better explanation, I’d actually really like to learn more.
My argument was never that abduction is a subset of induction, but that it can always be replaced by a combination of deduction and induction.
The effect of simplicity and consistency on probabilities can both be classified as deductions:
the former as a correct application of probabilistic logic (every additional assumption reduces the probability of the conclusion being true due to probability product)
the latter as a correct application of Bayes’ theorem or other types of logic (when [belief A] implies [not belief B])
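The probability-product point in the first bullet can be shown in a few lines (the individual assumption probabilities are invented): a conjunction of independent assumptions is never more probable than its least probable member, and adding assumptions only drives it down.

```python
# Each assumption is fairly plausible on its own (hypothetical values).
assumptions = [0.9, 0.8, 0.85]

joint = 1.0
for p in assumptions:
    joint *= p  # assuming independence, the conjunction multiplies

print(round(joint, 3))  # 0.612 -- below every individual assumption
```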
The underlying issue is that what we are trying to do with abduction is find the hidden mechanism behind the directly observable, the force of gravity that makes the apple fall. Since induction is limited to inferring future observations from past ones, it is limited to the observable and silent about behind-the-scenes mechanisms. And so it is limited compared to abduction, and so abduction is not a form of induction.
I don’t think this is generally correct. Induction is about moving probability mass to both parameters and models that best explain the evidence. So it both improves your existing models and makes you choose better models (aka new “mechanisms” as you call them).
(This is a Machine Learning friendly way to see induction; more generally, you could consider any model-parameter combination as a separate model.)
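A minimal sketch of “moving probability mass to the models that best explain the evidence” (an invented coin-flip setup): repeated updates concentrate the posterior on whichever model in the pool fits the data better.

```python
# Two fixed models of a coin; the data is a run of heads.
models = {"fair": 0.5, "biased": 0.8}      # P(heads | model), hypothetical
posterior = {"fair": 0.5, "biased": 0.5}   # start undecided

for _ in range(10):                        # observe 10 heads in a row
    unnorm = {m: posterior[m] * models[m] for m in models}
    z = sum(unnorm.values())
    posterior = {m: u / z for m, u in unnorm.items()}

print({m: round(p, 3) for m, p in posterior.items()})
# nearly all the mass ends up on "biased"
```

Note that the pool of models is fixed in advance; the update shifts mass between them but never invents a new one.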
Bayes allows you to confirm hypotheses that would generate the observable evidence, but doesn’t mechanically generate them for you, and also doesn’t allow you to distinguish equally predictive ones. You can solve the first problem by creatively positing hypotheses, and the second with the criteria of simplicity and consistency. That gives you full abductive reasoning. Bayes is a subset of full abductive reasoning.
I also don’t think this is quite correct. Simplicity and consistency should be considered evidence in your application of Bayes’ theorem. Namely, Bayes is complete: there is no other theorem or formula required to achieve the most accurate estimate of probability for beliefs. (Also, Bayesian statistics is considered a subset of inferential statistics, which is the formal mathematics associated with induction. Whether you think Bayes Theorem itself fits under induction or deduction, I don’t think most people would consider it abduction).
Besides this, if I understand correctly, you are proposing that the core of abduction is about generating hypotheses rather than evaluating the evidence for or against them. This I find intriguing.
I was originally considering the standpoint of an “optimal Bayesian” who simultaneously evaluates all hypotheses at once by shifting probability mass, but this is far from the human experience.
I do wonder whether this still happens subconsciously or whether hypothesis search constitutes in some way its own form of reasoning. But I’m afraid I haven’t thought enough about this, so I won’t be able to argue about it.
Thank you for the inspiration though, quite useful.
I don’t think this is generally correct. Induction is about moving probability mass to both parameters and models that best explain the evidence
Is it? That isn’t the classic definition. The classic definition is fairly limited, more like:-
Example: “For the past 7 days it has been raining. Therefore, tomorrow it will probably also rain.”
I.e., just more of the same, not an infinite variety of models.
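That limited, “more of the same” pattern does have a standard probabilistic form, Laplace’s rule of succession: after observing n successes in n trials, estimate the chance of another success as (n+1)/(n+2). (Just an illustration of probabilistic induction, not a serious weather model.)

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Laplace's estimate of the probability of one more success."""
    return (successes + 1) / (trials + 2)

# 7 rainy days out of 7 observed days
print(rule_of_succession(7, 7))  # 8/9, about 0.889
```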
It does sound like Bayes... but Bayes could be a superset of induction.
And where are you getting your models from? If you are creating them, that’s abduction, even if you are calling it induction. If they are already there, in some oracular database, that’s uncomputable ideal reasoning.
This is a Machine Learning friendly way to see induction
Or it’s something in ML that has been mislabelled “induction”, like “hallucination”
Bayes is complete: there is no other theorem or formula required to achieve the most accurate estimate of probability for beliefs
Complete what? It isn’t complete epistemology as I have shown.
(Also, Bayesian statistics is considered a subset of inferential statistics, which is the formal mathematics associated with induction
What does “associated with” mean? If inferential stats is a superset of induction it’s pretty unsurprising that it could contain abduction. If your models are being created on the fly, it actually does.
I was originally considering the standpoint of an “optimal Bayesian” who simultaneously evaluates all hypotheses at once by shifting probability mass, but this is far from the human experience
Indeed. If ideal Bayesians don’t need abduction, that doesn’t mean humans don’t.
focus on generating hypotheses/explanations/models
Why is that a bad thing?
That implies different permissible levels of making and breaking assumptions, choosing and changing models. It’s more fluid, less rule-bound, more willing to accept being knowingly wrong in some ways, less tied to formalisms and precise methods.
Yes. Hypothesis generation isn’t mechanical or algorithmic. That may be “bad”, but there’s not much alternative: you can’t actually use Solomonoff Induction, or whatever.
Well, Bayesian updates are no good unless you have the right hypothesis. That might be what you mean.
Not quite, prior to this. We could say, for example, that having a scratchy throat is evidence that one has a cold. Bayes allows us to formalize this claim somehow. But it does not actually tell us what a “scratchy throat”, or indeed a “cold”, is. A perfect reasoner does not need this—they need the possible laws of physics, a sensible prior, and very good computational skills, and concept formation is not relevant to them. But a bounded Bayesian does not have this luxury—we cannot actually draw the boundary in concept-space around these terms, we certainly cannot quantify how fuzzy the boundary is, and yet we find ourselves able to do mostly-sensible probability updates anyway, because prior to a Bayesian approach, we are somehow good at concept-grouping.
Incidentally, this is why I say metaphysics is harder for a Bayesian—a perfect reasoner, or a bounded inferentialist, does not require that their concept formation is perfect. Making metaphysical categories is helpful but optional for them. But a bounded Bayesian needs to do it and do it well, and it’s not clear how this is possible—you do not get it from priors and you do not get it from the update rule, and indeed these two things alone are not sufficient materials for a bounded Bayesian update.
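For concreteness, here is the formalization Bayes does supply once “scratchy throat” and “cold” are taken as given concepts (all numbers made up):

```python
# Hypothetical figures; the concepts themselves are simply assumed.
p_cold = 0.05                  # prior: P(cold)
p_scratchy_given_cold = 0.8    # P(scratchy throat | cold)
p_scratchy_given_not = 0.1     # P(scratchy throat | no cold)

# Total probability of the symptom, then Bayes' theorem.
p_scratchy = (p_scratchy_given_cold * p_cold
              + p_scratchy_given_not * (1 - p_cold))
p_cold_given_scratchy = p_scratchy_given_cold * p_cold / p_scratchy

print(round(p_cold_given_scratchy, 3))  # about 0.296
```

Everything here presupposes the concept-grouping step; the machinery says nothing about where “cold” came from.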
Why is that a bad thing?
I don’t think I implied it was a bad thing? I certainly didn’t intend to imply that.