My argument was never that abduction is a subset of induction, but that it can always be replaced by a combination of deduction and induction.
The effect of simplicity and consistency on probabilities can both be classified as deductions:
the former as a correct application of probabilistic logic (every additional assumption reduces the probability of the conclusion being true, since the probabilities of the assumptions multiply together)
the latter as a correct application of Bayes’ theorem or other types of logic (when [belief A] implies [not belief B])
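The "probabilities multiply" point can be made concrete with a one-function sketch, assuming independent assumptions for simplicity (the individual credences here are made up for illustration):

```python
# Under independence, a conjunction of assumptions is never more probable
# than any one of its parts, because each part multiplies in a factor <= 1.

def conjunction_probability(probs):
    """P(A1 and A2 and ...) under independence: the product of the parts."""
    result = 1.0
    for p in probs:
        result *= p
    return result

p_assumptions = [0.9, 0.8, 0.7]  # hypothetical credences in three assumptions

print(conjunction_probability(p_assumptions[:1]))  # 0.9
print(conjunction_probability(p_assumptions[:2]))  # ~0.72  (< 0.9)
print(conjunction_probability(p_assumptions))      # ~0.504 (< 0.72)
```

Each extra assumption strictly lowers the probability of the whole, which is the deductive core of the simplicity criterion.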
The underlying issue is that what we are trying to do with abduction is find the hidden mechanism behind the directly observable, the force of gravity that makes the apple fall. Since induction is limited to inferring future observations from past ones, it is limited to the observable and silent about behind-the-scenes mechanisms. It is therefore more limited than abduction, and so abduction is not a form of induction.
I don’t think this is generally correct. Induction is about moving probability mass to both parameters and models that best explain the evidence. So it both improves your existing models and makes you choose better models (aka new “mechanisms” as you call them).
(This is a Machine Learning friendly way to see induction; more generally, you could consider any model-parameter combination as a separate model.)
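One way to picture "moving probability mass to the models that best explain the evidence" is Bayesian model comparison. A minimal sketch, with two made-up coin models and invented evidence:

```python
# Two hypothetical models of a coin: fair vs. biased toward heads.
# Observing data shifts posterior probability mass toward whichever
# model assigns the data higher likelihood (Bayes' theorem).

def likelihood(p_heads, heads, tails):
    """Probability of the observed sequence under a given P(heads)."""
    return (p_heads ** heads) * ((1 - p_heads) ** tails)

models = {"fair": 0.5, "biased": 0.9}   # model -> its P(heads)
prior = {"fair": 0.5, "biased": 0.5}    # equal prior mass

heads, tails = 8, 2  # illustrative observed evidence

unnormalized = {m: prior[m] * likelihood(p, heads, tails)
                for m, p in models.items()}
total = sum(unnormalized.values())
posterior = {m: v / total for m, v in unnormalized.items()}

print(posterior)  # most of the mass has moved to "biased"
```

The same update rule handles parameters within a model and the choice between models; in that sense a "new mechanism" is just another hypothesis competing for mass.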
Bayes allows you to confirm hypotheses that would generate the observable evidence, but it doesn’t mechanically generate them for you, and it doesn’t allow you to distinguish equally predictive ones. You can solve the first problem by creatively positing hypotheses, and the second with the criteria of simplicity and consistency. That gives you full abductive reasoning. Bayes is a subset of full abductive reasoning.
I also don’t think this is quite correct. Simplicity and consistency should be considered evidence in your application of Bayes’ theorem. Namely, Bayes is complete: there is no other theorem or formula required to achieve the most accurate estimate of probability for beliefs. (Also, Bayesian statistics is considered a subset of inferential statistics, which is the formal mathematics associated with induction. Whether you think Bayes’ theorem itself fits under induction or deduction, I don’t think most people would consider it abduction.)
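The claim that simplicity enters through Bayes rather than alongside it can be sketched by giving two hypotheses identical likelihoods but a complexity-penalizing prior. The specific prior form here (halving per extra assumption) is an illustrative assumption, not a standard rule:

```python
# Two hypothetical hypotheses that predict the observed data equally well,
# differing only in how many extra assumptions they carry. An Occam-style
# prior (assumed here for illustration) breaks the tie inside Bayes'
# theorem itself, with no separate tie-breaking criterion needed.

def posterior_pair(prior_a, prior_b, like_a, like_b):
    ua, ub = prior_a * like_a, prior_b * like_b
    total = ua + ub
    return ua / total, ub / total

like = 0.3  # equal likelihoods: the data cannot distinguish the hypotheses

prior_simple = 0.5                 # hypothesis with no extra assumptions
prior_complex = 0.5 * (0.5 ** 2)   # same hypothesis plus two extra assumptions

p_simple, p_complex = posterior_pair(prior_simple, prior_complex, like, like)
print(p_simple, p_complex)  # the simpler hypothesis ends up favored
```

On this view, "prefer the simpler of two equally predictive hypotheses" is ordinary Bayesian updating with simplicity encoded in the prior, not an extra inference rule.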
Besides this, if I understand correctly, you are proposing that the core of abduction is about generating hypotheses rather than evaluating the evidence for or against them. This I find intriguing.
I was originally considering the standpoint of an “optimal Bayesian” who simultaneously evaluates all hypotheses at once by shifting probability mass, but this is far from the human experience.
I do wonder whether this still happens subconsciously or whether hypothesis search constitutes in some way its own form of reasoning. But I’m afraid I haven’t thought enough about this, so I won’t be able to argue about it.
Thank you for the inspiration though, quite useful.
I don’t think this is generally correct. Induction is about moving probability mass to both parameters and models that best explain the evidence
Is it? That isn’t the classic definition. The classic definition is fairly limited, more like:
Example: “For the past 7 days it has been raining. Therefore, tomorrow it will probably also rain.”
I.e., just more of the same, not an infinite variety of models.
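For what it’s worth, even this narrow "more of the same" reading has a standard formalization, Laplace’s rule of succession. A sketch of the rain example:

```python
# Laplace's rule of succession: after s successes in n trials (with a
# uniform prior over the unknown success rate), the probability that
# the next trial is also a success is (s + 1) / (n + 2).

def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

# Seven rainy days out of seven observed days:
p_rain_tomorrow = rule_of_succession(7, 7)
print(p_rain_tomorrow)  # 8/9, roughly 0.889
```

Notably, this version of "just more of the same" is itself a Bayesian computation over a single fixed model, which is exactly what the next point turns on.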
It does sound like Bayes... but Bayes could be a superset of induction.
And where are you getting your models from? If you are creating them, that’s abduction, even if you are calling it induction. If they are already there, in some oracular database, that’s uncomputable ideal reasoning.
This is a Machine Learning friendly way to see induction
Or it’s something in ML that has been mislabelled “induction”, like “hallucination”.
Bayes is complete: there is no other theorem or formula required to achieve the most accurate estimate of probability for beliefs
Complete what? It isn’t a complete epistemology, as I have shown.
(Also, Bayesian statistics is considered a subset of inferential statistics, which is the formal mathematics associated with induction
What does “associated with” mean? If inferential stats is a superset of induction it’s pretty unsurprising that it could contain abduction. If your models are being created on the fly, it actually does.
I was originally considering the standpoint of an “optimal Bayesian” who simultaneously evaluates all hypotheses at once by shifting probability mass, but this is far from the human experience
Indeed. If ideal Bayesians don’t need abduction, that doesn’t mean humans don’t.
focus on generating hypotheses/explanations/models
Why is that a bad thing?
That implies different permissible levels of making and breaking assumptions, choosing and changing models. It’s more fluid, less rule-bound, more willing to accept being knowingly wrong in some ways, less tied to formalisms and precise methods.
Yes. Hypothesis generation isn’t mechanical or algorithmic. That may be “bad”, but there’s not much alternative: you can’t actually use Solomonoff Induction, or whatever.
@AnthonyC
Why is that a bad thing?
Yes. Hypothesis generation isn’t mechanical or algorithmic. That may be “bad”, but there’s not much alternative: you can’t actually use Solomonoff Induction, or whatever.
I don’t think I implied it was a bad thing? I certainly didn’t intend to imply that.