So, to know that a criticism is decisive, you have to know that no one could possibly come up with a counter criticism.
I think you didn’t take into account the definition that I used: “A decisive argument (or group of arguments) contradicts the negation of its conclusion, so both can’t be true.” Excluding the possibility of counter criticism is unnecessary for this definition to be met. The point is that if A and B could both be true – if they’re compatible – then it’s problematic to view B as a criticism of A.
Neither form of perfection is available.
The goal is basically logical relevance, not perfection.
You are right that induction is dumb, but it still sometimes works, especially if taken probabilistically.
For induction to work, it’d have to define steps a person can follow to induce a theory. It’d have to specify what constitutes inducing a theory. The main issue with induction isn’t the quality of the results, but actually defining a specific method that produces any results. Over the years, I’ve never been able to get an answer to this along with a worked example and answers to basic questions like which of the infinitely many patterns fitting the data should be induced and which shouldn’t and why those.
Weighting is needed to see which is false.
When two things contradict and you’re deciding what side to take, weighting them and choosing the higher weighted side is one approach. But it’s certainly not the only approach. Since you’re just choosing between two things, quantitative evaluation seems less relevant or appealing than in many other scenarios.
I could go into more detail here and it’s an interesting topic but I think I’ve written enough for an initial reply so I’ll leave it at saying I don’t see what aspect of contradiction-resolution makes quantitative approaches mandatory. My best guess is you think they’re always mandatory for everything, which might be better approached from another angle, not via this sub-problem.
Weighting isn’t adding apples and oranges; it’s adding value_of(n apples) and value_of(m oranges). Everything gets converted to the same type first.
My link discusses dimension conversion (like from apples to value) being problematic. That’s covered.
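For concreteness, here is a minimal sketch of what that conversion step looks like. All the factor names, quantities, and per-unit weights are made up for illustration; the point under dispute is whether this conversion step is legitimate at all, not how to code it.

```python
# Hypothetical weighted-factor scoring: each factor is converted to a
# common "value" dimension via a per-unit weight, then the values are
# added. All names, quantities, and weights here are made up.
def weighted_score(factors, weights):
    """Sum of value_of(amount) across factors, on one common scale."""
    return sum(weights[name] * amount for name, amount in factors.items())

house_a = {"bedrooms": 4, "yard_sq_m": 200}
house_b = {"bedrooms": 3, "yard_sq_m": 600}
weights = {"bedrooms": 50, "yard_sq_m": 0.2}  # assumed "value" per unit

print(weighted_score(house_a, weights))  # 4*50 + 200*0.2 = 240.0
print(weighted_score(house_b, weights))  # 3*50 + 600*0.2 = 270.0
```

The criticism in the linked discussion targets the weights dictionary itself: where do the conversion rates come from, and why should they stay constant across different amounts?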
All our arguments are fallible
Then none are decisive!
Do you think fallibilism prohibits reaching conclusions? Decisive basically means conclusive, aka adequate to tentatively, fallibly reach a conclusion, as against arguments that don’t provide that much (where accepting the truth of the argument, as a premise, would still be inadequate to reach a conclusion).
Well... it’s not as simple as naive CR makes out. A single observation can be erroneous (e.g. Martian canals, cold fusion).
Popper knew that and wrote about it.
Indecisive arguments don’t have to be logically flawed; they can be reframed as valid probabilistic arguments.
Do you have an example? If it’s actually valid, I might tell you it’s decisive. As above, decisive is an easier standard than you interpreted it as. I’m not sure what sort of probabilistic argument you have in mind though.
I think you didn’t take into account the definition that I used: “A decisive argument (or group of arguments) contradicts the negation of its conclusion, so both can’t be true.” Excluding the possibility of counter criticism is unnecessary for this definition to be met. The point is that if A and B could both be true – if they’re compatible – then it’s problematic to view B as a criticism of A.
I don’t see how that connects to the ordinary meaning of “decisive”.
In fact passages like this
Decisive positive arguments are either rare or entirely inaccessible. Pointing out 1000 good things isn’t enough to prove an idea will succeed at its purpose
make it sound like decisiveness is the same as certainty. But if a “decisive” argument is fallible, and can be overridden, that is treating the overriding argument as having more weight.
The goal is basically logical relevance, not perfection.
Is this all about Hempel’s paradox?
Nothing about seeing a lot of black ravens actually means that there couldn’t be a white one.
Something about seeing a black raven means the next raven you see is slightly more likely to be black. You reject that reasoning, yet it’s relevant enough.
Something about seeing a white raven means that ravens aren’t all black… but not with certainty? But decisiveness isn’t certainty?
For induction to work, it’d have to define steps a person can follow to induce a theory. It’d have to specify what constitutes inducing a theory.
Minimally, induction induces patterns, not theories, and the simplest pattern is that events that have been observed multiple times in the past are likely to occur again in the future.
As you yourself said:
The basic concept of induction is to find patterns in data and learn from them.
The simplest pattern is “what has happened before will happen again”. Simple organisms can implement that.
The main issue with induction isn’t the quality of the results, but actually defining a specific method that produces any results.
Over the years, I’ve never been able to get an answer to this along with a worked example and answers to basic questions like which of the infinitely many patterns fitting the data should be induced and which shouldn’t and why those.
Obviously, we should start with the simplest. And we have to, if we are building an induction machine. We don’t have to find a singular, perfect rule, if we are just trying to make good-enough probabilistic predictions. Even the turkey is right 364/365 times.
The important thing is not to expect the probabilistic prediction to amount to certainty, and not to expect prediction to amount to explanation. Within those limits, induction, as probabilistic prediction, works.
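One standard formalization of this kind of probabilistic prediction is Laplace’s rule of succession. I’m using it here purely as an illustrative sketch of “induction as better-than-chance prediction”, not claiming it’s the specific method the comment above has in mind.

```python
# A sketch of "induction as probabilistic prediction" using Laplace's
# rule of succession: after s successes in n trials, estimate the
# probability of another success as (s + 1) / (n + 2).
def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

# Russell's turkey: fed 364 mornings in a row, so it predicts
# breakfast tomorrow with high probability... which is exactly the
# prediction that fails on slaughter day.
print(rule_of_succession(364, 364))  # 365/366, about 0.997
```

Note how this fits both halves of the claim: the prediction is probabilistic (never 1.0), and it is a prediction, not an explanation of why breakfasts occur.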
David Deutsch says induction is all about generating theories or knowledge or something , but you don’t have to take that at face value. There’s a simpler way of thinking about induction that is much more defensible.
When two things contradict and you’re deciding what side to take, weighting them and choosing the higher weighted side is one approach. But it’s certainly not the only approach.
You need to argue that it cannot work: “CF’s main motivation is logical arguments showing that various other approaches cannot possibly work.”
Since you’re just choosing between two things, quantitative evaluation seems less relevant or appealing than in many other scenarios.
I don’t see the relevance.
I could go into more detail here and it’s an interesting topic but I think I’ve written enough for an initial reply so I’ll leave it at saying I don’t see what aspect of contradiction-resolution makes quantitative approaches mandatory.
I don’t see the alternative. If arguments aren’t infallible, you would need to count and weight them.
My link discusses dimension conversion (like from apples to value) being problematic.
Problematic is far short of “couldn’t possibly work”.
Do you think fallibilism prohibits reaching conclusions?
You could lower the bar so that you draw conclusions at some likelihood less than 100%… but a lot of the things you object to could pass that bar too.
Decisive basically means conclusive, aka adequate to tentatively, fallibly reach a conclusion,
In the absence of certainty, you can reach (tentative) conclusions by weighing evidence and arguments, and going with the strongest. I can’t see how you can do that without weighing.
as against arguments that don’t provide that much (where accepting the truth of the argument, as a premise, would still be inadequate to reach a conclusion).
It’s obvious that CF is better than arguments that are irrelevant. It’s not obvious it’s better than weighting and induction.
Decisive: I think this is the best issue to resolve first and I’m hopeful we’ll be able to succeed here.
The ordinary meaning of “decisive” is “settling an issue; producing a definite result”. I don’t see where it says infallibly, permanently, without the possibility of later revision, or anything like that. We can reach a definite result (a conclusion) based on our currently available evidence and ideas.
People often talk about strong and weak arguments. All weak or moderate arguments, and many strong arguments, are indecisive. When shopping for a house, you might note nice kitchen countertops (indecisive, weak argument), a pool (indecisive, strong argument), painted a pretty color (indecisive, weak argument), large yard (indecisive, moderate argument), and many more things. Or you might figure out your goal specifically enough to enable a decisive argument like “I want a commute under 15 minutes and 4+ bedrooms; this house has 3 bedrooms so I won’t buy it”. Both styles of argument are fallible. But they do have a clear, significant difference. I think “decisive” is a good fit for this difference: 3 bedrooms being too few settles the issue and produces a definite result, whereas the large yard didn’t. Logically, on the assumptions or premises that the house has 3 bedrooms and the goal is 4+, we can reach a conclusion. But if we know it has a large yard and our goal is a good house, we cannot reach a conclusion: that’s compatible with picking or not picking this house.
Nothing about this is infallible. I could have misunderstood logic, or counting, or my goal, or what a house is, or all sorts of other things. While any of my conclusions are open to potential revision, it’s also realistic that they aren’t revised anytime soon, so despite fallibilism there is a significant difference between issues where I reached a conclusion and issues where I didn’t.
Also, are you familiar with Elimination by Aspects (EBA) or Satisficing? They have similarities/overlap with CF which could help clarify this part.
If you’re familiar with MCDM/MCDA literature, that could help too. There’s a concept of compensatory and non-compensatory approaches. Compensatory approaches mean that a weak score on some factors can be compensated for by a strong score on other factors. Compensatory approaches use factors indecisively, while non-compensatory approaches use factors decisively. In EBA, if a theory fails at one of the criteria then it’s eliminated with no way to un-eliminate it within the current decision making process (you have to go outside the process and invoke fallibility, new information, etc., to revise the conclusion).
Hempel’s Paradox: Relevant. Part of the issue.
Asymmetry: When you see a white raven, that doesn’t provide certainty. You could have misidentified the bird species. But on the premise that you saw a white raven, then logic enables you to conclude that “all ravens are black” is false. Asymmetrically, on the premise that you really did see a black raven, or a million of them, you cannot conclude that “all ravens are black” is true. With some arguments, if you assume your premises and background knowledge are true, then logic dictates a conclusion, while with other arguments even if your premises and background knowledge are correct that still wouldn’t be enough to reach the conclusion. Some arguments are decisive (settle issues, produce definite results) when assuming their premises and your background knowledge, while others still aren’t. This difference is compatible with fallibility (your premises and background knowledge could be doubted, revised, etc.).
Simplest pattern:
The simplest pattern is “what has happened before will happen again”. Simple organisms can implement that.
There are infinitely many patterns which fit the past. Of those patterns, infinitely many will break in the near future, infinitely many will break in the distant future, and infinitely many will hold forever. Many of these different patterns fit the data perfectly and contradict each other. Do you disagree? If you agree, then this simple pattern idea doesn’t guide which patterns to induce/use, right? So I don’t see how this claim helps. Examples: https://xkcd.com/1122/
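As a toy illustration of this underdetermination point (both rules below are made up, and the example proves nothing beyond the claim in the paragraph above):

```python
# Two made-up rules that agree perfectly on the observed data but
# contradict each other about every later term.
observed = [0, 1, 2, 3]

def rule_a(n):
    return n  # "the nth term is n"

def rule_b(n):
    return n + n * (n - 1) * (n - 2) * (n - 3)  # agrees only on 0..3

assert all(rule_a(n) == rule_b(n) for n in observed)  # both fit the past
print(rule_a(4), rule_b(4))  # 4 vs 28: they disagree about the future
```

Infinitely many variants of rule_b exist (add any polynomial with roots at the observed points), which is why fitting the past alone can’t select a pattern.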
Rule induction: Do any of these claim to offer a general purpose thinking method (including capable of doing philosophy debates, like we are now) which solves the which pattern(s) problem?
Cannot work for induction: “patterns are likely to continue in the future” approaches cannot possibly work in the context of infinitely many patterns that don’t continue and no viable solution for choosing between patterns.
Cannot work for weighted factors: Dimension conversion to generic goodness only works approximately and only in special cases. Other dimension conversions are also special cases, though some aren’t approximate (like E=mc^2). Relying on dimension conversion cannot possibly work for a general purpose thinking system because it’s not generally available. Also, the concept of factor weights relies on the importance of the factor being approximately the same for different values of the factor, which is often false (both due to failure breakpoints and due to diminishing marginal utility).
Certainty: I’ve been trying to discuss fallibilist versions of CF, weighted factors and induction. Critiquing infallibilists wasn’t my focus. One of my last discussions with David Deutsch was actually about this, back in ~2013. From memory, he basically claimed that all justificationists (advocates of any kind of positive/supporting arguments) are infallibilists, which I denied. I brought up LessWrong people in general as an example, since they tend to be non-Popperian fallibilists. He claimed that they’re only fallibilists by contradicting themselves, which doesn’t really count or help. I was unable to find out from him what the alleged contradiction is between 1) fallibilism 2) positive/supporting/justifying arguments.
Duhem-Quine:
I wasn’t accusing Popper of naive Popperism.
ok great. I don’t know who you were accusing, but generally speaking there are plenty of Popperians who I’m unimpressed by, so we might agree, idk.
When shopping for a house, you might note nice kitchen countertops (indecisive, weak argument), a pool (indecisive, strong argument), painted a pretty color (indecisive, weak argument), large yard (indecisive, moderate argument), and many more things. Or you might figure out your goal specifically enough to enable a decisive argument like “I want a commute under 15 minutes and 4+ bedrooms; this house has 3 bedrooms so I won’t buy it”.
If I find two houses with four bedrooms and a fifteen minute commute, I can decide between them using indecisive, nice-to-have features like a swimming pool as a further criterion.
I’m not forbidden from using decisive criteria, if that’s what they are. CRs and CFs are self-forbidden from using various things, though.
Decisive + indecisive criteria is better than decisive alone, because it enables more fine-grained decision making.
Both styles of argument are fallible. But they do have a clear, significant difference. I think “decisive” is a good fit for this difference: 3 bedrooms being too few settles the issue and produces a definite result, whereas the large yard didn’t. Logically, on the assumptions or premises that the house has 3 bedrooms and the goal is 4+, we can reach a conclusion. But if we know it has a large yard and our goal is a good house, we cannot reach a conclusion: that’s compatible with picking or not picking this house.
Nothing about this is infallible.
Then decisiveness isn’t an objective criterion… it’s a question of setting up a threshold, saying that 80% or 90% or 99% likelihood counts as decisiveness. Decisiveness is disguised weighting, if it isn’t infallibility.
Asymmetry
When you see a white raven, that doesn’t provide certainty. You could have misidentified the bird species. But on the premise that you saw a white raven, then logic enables you to conclude that “all ravens are black” is false. Asymmetrically, on the premise that you really did see a black raven, or a million of them, you cannot conclude that “all ravens are black” is true.
You cannot conclude it is certain, but you can conclude it is likely, and calculate a likelihood.
Induction
There are infinitely many patterns which fit the past. Of those patterns, infinitely many will break in the near future, infinitely many will break in the distant future, and infinitely many will hold forever. Many of these different patterns fit the data perfectly and contradict each other.
Yes. But I can still choose the simplest that fits the data I currently have, i.e. I can do induction in a good-enough way.
Do you disagree? If you agree, then this simple pattern idea doesn’t guide which patterns to induce/use, right?
I do not agree, it does, that’s the whole point. You start with the simplest, and move in to the next simplest, and so on.
We know that machines can induce, in a good-enough way, so there must be an algorithm for it.
Try it yourself. … imagine you are playing a game where you have to guess the next letter in a sequence. If the sequence starts “aaa...” you would naturally guess “a”. Anyone would, because everyone can do basic induction. If it turned out the next letter was “b” you could guess that the pattern is
“aaabaaabaaab..”
or
“aaabbbaaabbb..”
or
“aaabbbbccccddd...”
And maybe some other possibilities. Notice that you are not certain which pattern is the right one. Notice also that you are not at a loss to come up with simple candidate patterns; the infinity of possible patterns isn’t impeding you. Notice also that you can still make a probabilistic prediction, e.g. a 2/3 probability that the fifth letter will be a “b”.
“But surely there are more than three candidate patterns!” There are more complex patterns that fit, but they get low weighting because they are complex.
“But that’s Conjecture and Refutation!” Maybe it is! If you want to say induction cannot possibly work, and maintain that C&R does work, you need to show that induction isn’t a form of C&R. (And also that it’s failing at something that is actually claimed for it by inductionists.)
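The guessing-game arithmetic can be sketched directly. The three candidate patterns and their equal weighting come from the example; the helper function and everything else are illustrative.

```python
# Three equally weighted candidate patterns, all fitting the observed
# prefix "aaab", and the fraction predicting "b" as the fifth letter.
def repeat_to(pattern, length):
    """Repeat a base pattern and truncate to the given length."""
    return (pattern * (length // len(pattern) + 1))[:length]

candidates = [
    repeat_to("aaab", 5),    # "aaaba..."  -> fifth letter "a"
    repeat_to("aaabbb", 5),  # "aaabbb..." -> fifth letter "b"
    "aaabb",                 # prefix of "aaabbbbccccddd..." -> "b"
]

assert all(c.startswith("aaab") for c in candidates)  # all fit the data
p_b = sum(c[4] == "b" for c in candidates) / len(candidates)
print(p_b)  # 0.666..., i.e. the 2/3 from the example
```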
Rule induction: Do any of these claim to offer a general purpose thinking method (including capable of doing philosophy debates, like we are now) which solves the which pattern(s) problem?
The only valid objections to induction are that it doesn’t achieve some kind of perfection, such as complete certainty or complete generality.
That was an objection from generality. It’s irrelevant, because I only claimed that induction was capable of working predictively and probabilistically. That indeed does not work for certain high-bar problems, but that should not be summarised as “cannot work at all”.
BTW, we don’t know that what we are doing now is fully general. Maybe there are things human beings just can’t think of.
Cannot work for induction: “patterns are likely to continue in the future” approaches cannot possibly work in the context of infinitely many patterns that don’t continue and no viable solution for choosing between patterns.
There is a way of choosing between patterns. It’s simplicity, as I said. It can be shown to work, so long as you are only aiming for probabilistic prediction. There’s an argument that induction can’t tell you the exact laws of nature, with certainty, given a limited data set, but that’s much more ambitious than what I am talking about.
Weighting
Cannot work for weighted factors: Dimension conversion to generic goodness only works approximately and only in special cases.
If you are only trying to satisfy your own values, then the weighting is just how much you value things in relation to each other. Presumably, your objection is the lack of objective criteria… but if you are making a personal decision, why would that matter?
Justification
Critiquing infallibilists wasn’t my focus. One of my last discussions with David Deutsch was actually about this, back in ~2013. From memory, he basically claimed that all justificationists (advocates of any kind of positive/supporting arguments) are infallibilists, which I denied. I brought up LessWrong people in general as an example, since they tend to be non-Popperian fallibilists. He claimed that they’re only fallibilists by contradicting themselves, which doesn’t really count or help. I was unable to find out from him what the alleged contradiction is between 1) fallibilism 2) positive/supporting/justifying arguments.
Yes, Deutsch is frustrating. He tends to state things without justification. That’s consistent with his rejection of justificationism, but at the same time you generally need more than “my idea isn’t contradicted by anything” to motivate you to change your mind. Which is an argument for justificationism from argumentative reasoning.
Then decisiveness isn’t an objective criterion… it’s a question of setting up a threshold, saying that 80% or 90% or 99% likelihood counts as decisiveness. Decisiveness is disguised weighting, if it isn’t infallibility.
Per my article, decisiveness, like other idea evaluation, depends on the goal and context. “It costs $100” is decisive criticism for a $20 budget goal but not a $200 budget goal.
But this doesn’t use likelihoods or weights. It uses qualitative differences or breakpoints for quantities (which are the points where the difference in quantity makes a qualitative difference). The generic breakpoint is “good enough for success at my goal or not?”
Decisive + indecisive criteria is better than decisive alone, because it enables more fine-grained decision making.
You can do fine-grained decision making, without limitation, using decisive reasoning alone. And convenience comparisons or marginal benefits are irrelevant given my claim (which is currently an open issue under discussion) that indecisive reasoning doesn’t work at all.
If you are only trying to satisfy your own values, then the weighting is just how much you value things in relation to each other. Presumably, your objection is the lack of objective criteria… but if you are making a personal decision, why would that matter?
Epistemology should be general purpose and cover impersonal issues like scientific controversies, and allow for productive debate rather than being subjective or arbitrary.
By no objective criteria do you mean people can and should just subjectively/intuitively make up the numbers with no math? If so, how can they do that? How would they or their intuition determine what numbers roughly feel right? By using intelligence via some other full general-purpose epistemology which has been used as a premise/prerequisite of this approach? My understanding is that for this kind of weighted factor math stuff to be a first epistemology – a first solution to how people think intelligently, as I believe it’s claimed to be – then the math has to work objectively and you can’t just rely on people somehow intelligently coming up with numbers that are in the right ballpark. If you rely on intelligence then it’s only a secondary method which leaves all the primary questions in epistemology open.
Also if the numbers are being made up non-objectively so they feel about right, why not just make up a conclusion that feels about right directly? What good is the intermediate step of making up the numbers?
“But that’s Conjecture and Refutation!” Maybe it is! If you want to say induction cannot possibly work , and maintain that C&R does work, you need to show that induction isn’t a form of C&R. (And also that it’s failing at something that is actually claimed for it by inductionists).
There are many different versions of induction. If you pick a specific version of induction (preferably one with at least one book explaining it in detail like Popper’s books explain Critical Rationalism) then we can discuss how it differs from C&R, what it claims, and whether it lives up to those claims.
There are infinitely many patterns which fit the past. Of those patterns, infinitely many will break in the near future, infinitely many will break in the distant future, and infinitely many will hold forever. Many of these different patterns fit the data perfectly and contradict each other.
Yes. But I can still choose the simplest that fits the data I currently have, i.e. I can do induction in a good-enough way.
Which patterns are simplest? What’s the rule to judge that? Does applying the rule require intelligence as a prerequisite?
Per my article, decisiveness, like other idea evaluation, depends on the goal and context. “It costs $100” is decisive criticism for a $20 budget goal but not a $200 budget goal.
But you can’t expect any given context to supply you with a set of decisive criteria that narrow your options to one.
But this doesn’t use likelihoods or weights.
It uses an arbitrary threshold of decisiveness.
It uses qualitative differences or breakpoints for quantities (which are the points where the difference in quantity makes a qualitative difference). The generic breakpoint is “good enough for success at my goal or not?”
The examples you have given look qualitative.
Decisive + indecisive criteria is better than decisive alone, because it enables more fine-grained decision making.
You can do fine-grained decision making, without limitation, using decisive reasoning alone.
I don’t see how.
And convenience comparisons or marginal benefits are irrelevant given my claim (which is currently an open issue under discussion) that indecisive reasoning doesn’t work at all.
If you are only trying to satisfy your own values, then the weighting is just how much you value things in relation to each other. Presumably, your objection is the lack of objective criteria… but if you are making a personal decision, why would that matter?
Epistemology should be general purpose and cover impersonal issues like scientific controversies, and allow for productive debate rather than being subjective or arbitrary.
Epistemology quite possibly can’t be general purpose, in the sense that the same techniques apply to different kinds of problem.
By no objective criteria do you mean people can and should just subjectively/intuitively make up the numbers with no math?
I mean with subjective criteria.
If so, how can they do that? How would they or their intuition determine what numbers roughly feel right?
They can do that. Asking how they do it doesn’t mean it’s impossible.
By using intelligence via some other full general-purpose epistemology which has been used as a premise/prerequisite of this approach? My understanding is that for this kind of weighted factor math stuff to be a first epistemology – a first solution to how people think intelligently, as I believe its claimed to be – then the math has to work objectively and you can’t just rely on people somehow intelligently coming up with numbers that are in the right ballpark. If you rely on intelligence then it’s only a secondary method which leaves all the primary questions in epistemology open.
Different problems require different approaches. I’m not saying subjective weighting is the answer to everything.
Also if the numbers are being made up non-objectively
Non-objective and made-up are not the same thing.
so they feel about right, why not just make up a conclusion that feels about right directly?
People do. I am not saying there is one method to rule them all.
What good is the intermediate step of making up the numbers?
If a thing is worth doing, it is worth doing with made-up numbers.
There are many different versions of induction.
Which is why it is difficult to show none of them could possibly work.
If you pick a specific version of induction (preferably one with at least one book explaining it in detail like Popper’s books explain Critical Rationalism) then we can discuss how it differs from C&R, what it claims, and whether it lives up to those claims.
I have picked probabilistic prediction, which can be shown to work directly, without needing a theoretical justification.
There are infinitely many patterns which fit the past. Of those patterns, infinitely many will break in the near future, infinitely many will break in the distant future, and infinitely many will hold forever. Many of these different patterns fit the data perfectly and contradict each other.
Yes. But I can still choose the simplest that fits the data I currently have, i.e. I can do induction in a good-enough way.
Which patterns are simplest?
You know the “aaaaa” pattern is simpler than the others. It’s no great mystery.
What’s the rule to judge that?
People here like Kolmogorov complexity. That isn’t some unanswerable question.
Does applying the rule require intelligence as a prerequisite?
You don’t need much intelligence to do simple induction, since simple organisms can do it.
But you can’t expect any given context to supply you with a set of decisive criteria that narrow your options to one.
Most goals have many solutions which we should be ~indifferent between – they all work and it’s not worth our time to optimize more.
In the cases where optimization is worthwhile and there are multiple solutions, we can narrow it down further by considering more ambitious goals.
As a simple approximation, looking only at viable solutions you want to optimize between, you may maximize one factor. Maximizing a single factor doesn’t require combining factors, dimension conversion, rank ordering or weighting, and keeps the method non-compensatory (a problem with one factor can’t be outweighed by some other factors being good). The problems with non-linear value functions are often quite manageable when dealing with only one non-binary factor. If you model decision making as multiplying many binary factors, you can also multiply in one non-binary factor without the problems that come from multiple non-binary factors. This gives you a simple answer which I don’t consider ideal but it’s mostly OK and doesn’t require reading essays to get a more complicated answer.
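A minimal sketch of that filter-then-maximize procedure follows. The houses, criteria, and numbers are all made up for illustration; the point is only the shape of the method, not these particular choices.

```python
# Filter-then-maximize: eliminate candidates failing any binary
# (decisive) criterion, then maximize one non-binary factor among the
# survivors. Houses, criteria, and numbers are all made up.
houses = [
    {"name": "A", "bedrooms": 4, "commute_min": 12, "yard_sq_m": 200},
    {"name": "B", "bedrooms": 3, "commute_min": 10, "yard_sq_m": 600},
    {"name": "C", "bedrooms": 4, "commute_min": 14, "yard_sq_m": 450},
]

# Decisive, non-compensatory filters: fail one and you're out.
# B's big yard can't compensate for its missing bedroom.
viable = [h for h in houses if h["bedrooms"] >= 4 and h["commute_min"] < 15]

# One non-binary factor maximized among survivors; no weighting or
# dimension conversion is needed because only one dimension remains.
best = max(viable, key=lambda h: h["yard_sq_m"])

print([h["name"] for h in viable], best["name"])  # ['A', 'C'] C
```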
It uses an arbitrary threshold of decisiveness.
Budgets, or more generally goals, aren’t arbitrary and have breakpoints/thresholds inherent in them, which we should look for. The most generic threshold is “enough (or a low enough amount for negative factors) for goal success”.
If so, how can they do that? How would they or their intuition determine what numbers roughly feel right?
They can do that. Asking how they do it doesn’t mean it’s impossible.
My claim is it can’t be done other than via conjectures and refutations, CF, the stuff I’m advocating. I’m claiming that other methods don’t work. If people do it but you don’t know how, that is compatible with my claim, since they may be using the things I’m saying do work. This isn’t counter-evidence against me.
There are many different versions of induction.
Which is why it is difficult to show none of them could possibly work.
They have common themes, so it can be done using abstract arguments as long as people agree in broad strokes on what sorts of things are and aren’t induction. If you start loosening up the definition of “induction” to include C&R, that’s way too broad, and it’s no longer the same thing that Popper or I said doesn’t work, and it no longer fits the historical tradition/meaning of induction (unless we’re missing something, which you’d have to show).
If you pick a specific version of induction (preferably one with at least one book explaining it in detail like Popper’s books explain Critical Rationalism) then we can discuss how it differs from C&R, what it claims, and whether it lives up to those claims.
I have picked probabilistic prediction, which can be shown to work directly, without needing a theoretical justification.
My primary concern with literature isn’t the justification but just the specification of how it works. You haven’t provided a well-defined non-moving target for my criticism, as both CR and CF provide to you. Usually, even when highly abstract discussion is pretty effective (as is needed to cover induction generically), it’s still best to go over at least one more specific example, so if you could specify one version of induction in detail (preferably via cite) we could use it as an example.
You know the “aaaaa” pattern is simpler than the others. It’s no great mystery.
I have an answer in that easy case that I believe I got via C&R. If you don’t give the math, then you aren’t showing that some non-C&R method can evaluate simplicity. And just because I have an answer in a few easy cases doesn’t mean that you or I have a good answer in harder cases.
People here like Kolmogorov complexity. That isn’t some unanswerable question.
Kolmogorov complexity is uncomputable and machine-dependent, right? So it’s not a usable approach. That people like it anyway is evidence about how hard the question is and how poor the known answers are.
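One computable stand-in that people sometimes use for description length is compressed size. This is a minimal sketch of that idea (my own illustration, not a method either side endorses), using zlib as the arbitrary reference compressor — which itself demonstrates the machine-dependence complained about above:

```python
import zlib

def description_length(s: str) -> int:
    """Bytes of zlib-compressed text: a crude, computable,
    compressor-dependent proxy for Kolmogorov complexity."""
    return len(zlib.compress(s.encode("utf-8"), 9))

regular = "a" * 60                                      # "aaaaa..." style pattern
irregular = "the quick brown fox jumps over the lazy dog"

# The regular string compresses far better, matching the intuition that
# "aaaaa" is simpler -- but swap in a different compressor and the
# numbers (and potentially the ranking of closer cases) change.
```

The proxy makes "aaaaa is simpler" mechanical in easy cases, but it inherits exactly the arbitrariness under discussion: the choice of compressor plays the role of the choice of reference machine.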
You don’t need much intelligence to do simple induction, since simple organisms can do it.
I deny that humans can do induction. I also deny that simple organisms can do it. I doubt this is a good sub-topic to go into right now.
Most goals have many solutions which we should be ~indifferent between – they all work and it’s not worth our time to optimize more.
Many don’t: they have solutions that deliver worthwhile but differing amounts of utility. So the one-size-fits-all approach isn’t going to work for them.
My claim is it can’t be done other than via conjectures and refutations
Your claim was that it could not possibly work at all.
Anyway: simple induction can be implemented by simple organisms and programmes. They are too simple to be deliberately making conjectures, but capable of running a hardwired algorithm that just expects the same result from the same cause.
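The “hardwired algorithm that just expects the same result from the same cause” can be sketched in a few lines. This is my own toy formulation of the claim, not code from either commenter:

```python
# A "hardwired" predictor: expect the same result from the same cause,
# remembering only the most recent outcome observed for each cause.
class SameCauseSameResult:
    def __init__(self):
        self.memory = {}

    def observe(self, cause, result):
        self.memory[cause] = result

    def predict(self, cause):
        # None means "no expectation yet" -- nothing like a conjecture
        # is being deliberately formulated.
        return self.memory.get(cause)

bird = SameCauseSameResult()
bird.observe("hawk silhouette", "danger")
```

Whether running such a lookup table counts as “induction” is, of course, exactly what is in dispute below.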
You haven’t provided a well-defined non-moving target for my criticism, as both CR and CF provide to you.
Yes I have: better than chance prediction.
You know the “aaaaa” pattern is simpler than the others. It’s no great mystery.
I have an answer in that easy case that I believe I got via C&R.
That doesn’t mean C&R is the only possible mechanism.
People here like Kolmogorov complexity. That isn’t some unanswerable question.
Kolmogorov complexity is uncomputable and machine-dependent, right
I’m not a fan myself, but it’s not like no one has any clue about how simplicity works.
I deny that humans can do induction. I also deny that simple organisms can do it.
Deny it all you like, there’s evidence they do it.
Chatgpt: Induction, in a broad sense, means learning a general rule or expectation from repeated experience rather than from a single fixed instinct. Many animal behaviours fit this pattern, even if they’re simpler than human reasoning. Here are some clear examples:
Trial-and-error learning (generalising from outcomes)
Rats in mazes learn that certain turns or paths tend to lead to food. Over time, they don’t just remember one route—they form a general expectation like “this direction usually pays off.”
This kind of behaviour was famously studied by Edward Thorndike, who showed animals gradually “induce” successful strategies.
Conditioning (predictive associations)
In classical conditioning experiments by Ivan Pavlov, dogs learned that a bell predicts food. They generalise from repeated pairings to a rule: “bell → food is coming.”
This is inductive because the animal infers a predictive relationship from repeated experience.
Foraging decisions (learning patterns in the environment)
Bees learn which flower colours or shapes tend to contain nectar. They don’t test every flower randomly forever—they generalise: “purple flowers here are usually rewarding.”
This shows induction from multiple encounters to a probabilistic rule.
Predator avoidance (learning danger cues)
Birds that survive encounters with predators often learn to recognise certain shapes or movements (e.g., hawk silhouettes) as dangerous.
They generalise from specific experiences to a broader category: “things like this are threats.”
Habituation and sensitisation (learning what matters)
Animals stop responding to repeated harmless stimuli (habituation), effectively “learning” that a stimulus predicts nothing important.
Conversely, sensitisation increases response after significant events. Both involve extracting regularities from experience.
Tool use and problem solving (higher-level induction)
Some primates and corvids (like crows) learn rules about tools—for example, that sticks can retrieve food from holes.
Over time, they apply this rule in new contexts, suggesting a more flexible, inductive generalisation.
A useful distinction:
Not all learned behaviour is equally “inductive.” Simple conditioning might just be association, while more complex behaviours (like those in crows or apes) come closer to forming abstract rules. But in all these cases, the key feature is the same: the animal uses past experiences to form expectations about new situations.
My claim is it can’t be done other than via conjectures and refutations
Your claim was that it could not possibly work at all.
When I said that, I was using standard definitions that excluded C&R.
You haven’t provided a well-defined non-moving target for my criticism, as both CR and CF provide to you.
Yes I have: better than chance prediction.
“better than chance prediction” with the predictions done by what method? What is the math algorithm or flowchart? You’re still not providing specifics.
I deny that humans can do induction. I also deny that simple organisms can do it.
Deny it all you like, there’s evidence they do it.
This is jumping ahead. Humans do something. Whether or not it’s induction depends on what induction is, which is a current conversation topic.
Chatgpt: Induction, in a broad sense, means
I’m not going to debate Chatgpt, and this is unhelpful when I’ve already read many versions of induction and don’t need an introductory summary. Is there no literature you can cite that you think writes down correct details of induction? The issue isn’t my familiarity with induction, it’s you picking a specific claim. Even if you’re unsure and think one of many inductivist positions may be right, you could still pick a single one for us to discuss in more detail. I can’t pick that for you but it needs to be picked for me to give more specific criticism.
What are your intentions with this discussion? I’d be open to trying to actually work through these issues and reach a conclusion. I’d be open to mutually agreeing to put in some effort. Right now, every time I reply, I don’t know if you’re ever going to reply again. I don’t think we’re going to resolve these issues quickly but I think the topics are important and I’m interested in trying seriously.
Thanks for engaging.
My link discusses dimension conversion (like from apples to value) being problematic. That’s covered.
Do you think fallibilism prohibits reaching conclusions? Decisive basically means conclusive, aka adequate to tentatively, fallibly reach a conclusion, as against arguments that don’t provide that much (where accepting the truth of the argument, as a premise, would still be inadequate to reach a conclusion).
Popper knew that and wrote about it.
Do you have an example? If it’s actually valid, I might tell you it’s decisive. As above, decisive is an easier standard than you interpreted it as. I’m not sure what sort of probabilistic argument you have in mind though.
I don’t see how that connects to the ordinary meaning of “decisive”.
In fact, passages like this make it sound like decisiveness is the same as certainty. But if a “decisive” argument is fallible and can be overridden, that is treating the overriding argument as having more weight.
Is this all about Hempel’s paradox?
Something about seeing a black raven means the next raven you see is slightly more likely to be black. Yet you reject that reasoning. And yet it’s relevant enough.
Something about seeing a white raven means that ravens aren’t all black… but not with certainty? But decisiveness isn’t certainty?
Minimally, induction induces patterns, not theories, and the simplest pattern is that events that have been observed multiple times in the past are likely to occur again in the future.
As you yourself said:
The simplest pattern is “what has happened before will happen again”. Simple organisms can implement that…
…while machine learning can implement more complex versions. https://en.wikipedia.org/wiki/Rule_induction
Obviously, we should start with the simplest. And we have to, if we are building an induction machine. We don’t have to find a singular, perfect rule, if we are just trying to make good-enough probabilistic predictions. Even the turkey is right 364/365 times.
The important thing is not to expect the probabilistic prediction to amount to certainty, and not to expect prediction to amount to explanation. Within those limits, induction, as probabilistic prediction, works.
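The turkey’s hit rate has a standard probabilistic form. As one illustration (my choice of rule; the comment doesn’t name one), Laplace’s rule of succession turns past frequencies into a prediction that never amounts to certainty:

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace's rule: P(next success) = (s + 1) / (n + 2).
    Always strictly between 0 and 1 -- prediction, never certainty."""
    return Fraction(successes + 1, trials + 2)

# The inductivist turkey, fed every morning for 364 straight days,
# assigns high probability to breakfast on day 365 -- and is wrong once.
p_fed = rule_of_succession(364, 364)   # 365/366
```

The rule builds in exactly the limit stated above: the probability approaches 1 as evidence accumulates but never reaches it, and it predicts without explaining anything.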
David Deutsch says induction is all about generating theories or knowledge or something, but you don’t have to take that at face value. There’s a simpler way of thinking about induction that is much more defensible.
You need to argue that it cannot work: “CF’s main motivation is logical arguments showing that various other approaches cannot possibly work”.
I don’t see the relevance.
I don’t see the alternative. If arguments aren’t infallible, you would need to count and weight them.
Problematic is far short of “couldn’t possibly work”.
You could lower the bar so that you draw conclusions at some likelihood less than 100%… but a lot of the things you object to could pass that bar too.
In the absence of certainty, you can reach (tentative) conclusions by weighing evidence and arguments, and going with the strongest. I can’t see how you can do that without weighing.
It’s obvious that CF is better than arguments that are irrelevant. It’s not obvious it’s better than weighting and induction.
I wasn’t accusing Popper of naive Popperism.
Thanks for engaging again.
Decisive: I think this is the best issue to resolve first and I’m hopeful we’ll be able to succeed here.
The ordinary meaning of “decisive” is “settling an issue; producing a definite result”. I don’t see where it says infallibly, permanently, without the possibility of later revision, or anything like that. We can reach a definite result (a conclusion) based on our currently available evidence and ideas.
People often talk about strong and weak arguments. All weak or moderate arguments, and many strong arguments, are indecisive. When shopping for a house, you might note nice kitchen countertops (indecisive, weak argument), a pool (indecisive, strong argument), painted a pretty color (indecisive, weak argument), large yard (indecisive, moderate argument), and many more things. Or you might figure out your goal specifically enough to enable a decisive argument like “I want a commute under 15 minutes and 4+ bedrooms; this house has 3 bedrooms so I won’t buy it”. Both styles of argument are fallible. But they do have a clear, significant difference. I think “decisive” is a good fit for this difference: 3 bedrooms being too few settles the issue and produces a definite result, whereas the large yard didn’t. Logically, on the assumptions or premises that the house has 3 bedrooms and the goal is 4+, we can reach a conclusion. But if we know it has a large yard and our goal is a good house, we cannot reach a conclusion: that’s compatible with picking or not picking this house.
Nothing about this is infallible. I could have misunderstood logic, or counting, or my goal, or what a house is, or all sorts of other things. While any of my conclusions are open to potential revision, it’s also realistic that they aren’t revised anytime soon, so despite fallibilism there is a significant difference between issues where I reached a conclusion and issues where I didn’t.
Also, are you familiar with Elimination by Aspects (EBA) or Satisficing? They have similarities/overlap with CF which could help clarify this part.
If you’re familiar with MCDM/MCDA literature, that could help too. There’s a concept of compensatory and non-compensatory approaches. Compensatory approaches mean that a weak score on some factors can be compensated for by a strong score on other factors. Compensatory approaches use factors indecisively, while non-compensatory approaches use factors decisively. In EBA, if a theory fails at one of the criteria then it’s eliminated with no way to un-eliminate it within the current decision making process (you have to go outside the process and invoke fallibility, new information, etc., to revise the conclusion).
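A minimal sketch of EBA’s non-compensatory filtering, reusing the house-shopping example from above (the listings, attribute names, and thresholds are invented for illustration):

```python
# Elimination by Aspects: each aspect is a decisive, pass/fail test.
# Failing one eliminates the option; no other strength can buy it back.
houses = [
    {"name": "A", "bedrooms": 3, "commute_min": 10, "pool": True},
    {"name": "B", "bedrooms": 4, "commute_min": 12, "pool": False},
    {"name": "C", "bedrooms": 5, "commute_min": 25, "pool": True},
]

aspects = [
    lambda h: h["bedrooms"] >= 4,      # goal: 4+ bedrooms
    lambda h: h["commute_min"] <= 15,  # goal: commute under 15 minutes
]

survivors = houses
for passes in aspects:
    survivors = [h for h in survivors if passes(h)]

# A fails the bedroom aspect, C fails the commute aspect; B remains.
# Note that A's pool never enters into it -- the method is
# non-compensatory: there are no scores for strengths to add to.
```

Un-eliminating A or C is only possible by going outside this loop — revising the aspects or the data — which matches the fallibility point above.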
Hempel’s Paradox: Relevant. Part of the issue.
Asymmetry: When you see a white raven, that doesn’t provide certainty. You could have misidentified the bird species. But on the premise that you saw a white raven, then logic enables you to conclude that “all ravens are black” is false. Asymmetrically, on the premise that you really did see a black raven, or a million of them, you cannot conclude that “all ravens are black” is true. With some arguments, if you assume your premises and background knowledge are true, then logic dictates a conclusion, while with other arguments even if your premises and background knowledge are correct that still wouldn’t be enough to reach the conclusion. Some arguments are decisive (settle issues, produce definite results) when assuming their premises and your background knowledge, while others still aren’t. This difference is compatible with fallibility (your premises and background knowledge could be doubted, revised, etc.).
Simplest pattern:
There are infinitely many patterns which fit the past. Of those patterns, infinitely many will break in the near future, infinitely many will break in the distant future, and infinitely many will hold forever. Many of these different patterns fit the data perfectly and contradict each other. Do you disagree? If you agree, then this simple pattern idea doesn’t guide which patterns to induce/use, right? So I don’t see how this claim helps. Examples: https://xkcd.com/1122/
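The underdetermination point can be made concrete with three toy rules (my own invented examples, echoing the xkcd link) that agree on every observation so far and then diverge:

```python
# Three hypotheses that fit the observed data "aaaa" perfectly
# but predict different futures.
def holds_forever(n):
    return "a"

def breaks_soon(n):
    return "a" if n < 5 else "b"

def breaks_later(n):
    return "a" if n < 1000 else "b"

hypotheses = [holds_forever, breaks_soon, breaks_later]
observed = range(4)  # positions 0..3 were all "a"

# All three fit the data perfectly...
fits = all(h(n) == "a" for h in hypotheses for n in observed)
# ...yet they contradict each other about position 5.
predictions = {h(5) for h in hypotheses}
```

Infinitely many more rules of the `breaks_soon` family fit equally well, which is why “fit the past” alone picks out nothing.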
Rule induction: Do any of these claim to offer a general purpose thinking method (including capable of doing philosophy debates, like we are now) which solves the which pattern(s) problem?
Cannot work for induction: “patterns are likely to continue in the future” approaches cannot possibly work in the context of infinitely many patterns that don’t continue and no viable solution for choosing between patterns.
Cannot work for weighted factors: Dimension conversion to generic goodness only works approximately and only in special cases. Other dimension conversions are also special cases, though some aren’t approximate (like E=mc^2). Relying on dimension conversion cannot possibly work for a general purpose thinking system because it’s not generally available. Also, the concept of factor weights relies on the importance of the factor being approximately the same for different values of the factor, which is often false (both due to failure breakpoints and due to diminishing marginal utility).
Certainty: I’ve been trying to discuss fallibilist versions of CF, weighted factors and induction. Critiquing infallibilists wasn’t my focus. One of my last discussions with David Deutsch was actually about this, back in ~2013. From memory, he basically claimed that all justificationists (advocates of any kind of positive/supporting arguments) are infallibilists, which I denied. I brought up LessWrong people in general as an example, since they tend to be non-Popperian fallibilists. He claimed that they’re only fallibilists by contradicting themselves, which doesn’t really count or help. I was unable to find out from him what the alleged contradiction is between 1) fallibilism 2) positive/supporting/justifying arguments.
Duhem-Quine:
ok great. I don’t know who you were accusing, but generally speaking there are plenty of Popperians who I’m unimpressed by, so we might agree, idk.
Decisiveness
If I find two houses with four bedrooms and a fifteen minute commute, I can decide between them using indecisive, nice-to-have features like a swimming pool as a further criterion.
I’m not forbidden from using decisive criteria, if that’s what they are. CRs and CFs are self-forbidden from using various things, though.
Decisive + indecisive criteria is better than decisive alone, because it enables more fine-grained decision making.
Then decisiveness isn’t an objective criterion… it’s a question of setting up a threshold, saying that 80% or 90% or 99% likelihood counts as decisiveness. Decisiveness is disguised weighting, if it isn’t infallibility.
Asymmetry
You cannot conclude it is certain, but you can conclude it is likely and calculate a likelihood.
Induction
Yes. But I can still choose the simplest that fits the data I currently have, i.e. I can do induction in a good-enough way.
I do not agree, it does, that’s the whole point. You start with the simplest, and move on to the next simplest, and so on.
We know that machines can induce, in a good-enough way, so there must be an algorithm for it.
Try it yourself… imagine you are playing a game where you have to guess the next letter in a sequence. If the sequence starts “aaa…” you would naturally guess “a”. Anyone would, because everyone can do basic induction. If it turned out the next letter was “b”, you could guess that the pattern is
“aaabaaabaaab..”
or
“aaabbbaaabbb..”
or
“aaabbbbccccddd...”
And maybe some other possibilities. Notice that you are not certain which pattern is the right one. Notice also that you are not at a loss to come up with simple candidate patterns… the infinity of possible patterns isn’t impeding you. Notice also that you can still make a probabilistic prediction, e.g. 2/3 probability that the fifth letter will be a “b”.
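The 2/3 figure checks out mechanically if you put a uniform prior over just these three candidates. (Treating each candidate as a repeating unit is my assumption for the third pattern; it only matters past the letters shown.)

```python
from itertools import cycle, islice

def extend(unit, length):
    """First `length` letters of the pattern made by repeating `unit`."""
    return "".join(islice(cycle(unit), length))

observed = "aaab"
candidates = ["aaab", "aaabbb", "aaabbbbccccddd"]

# Keep only patterns consistent with the letters seen so far.
consistent = [u for u in candidates if extend(u, len(observed)) == observed]

# Uniform prior over the survivors: what fraction predict "b" fifth?
fifth = [extend(u, 5)[4] for u in consistent]
p_b = fifth.count("b") / len(fifth)   # "aaab" says "a"; the other two say "b"
```

Note what the code does not do: it is handed the three candidates. The disputed step — why these three out of the infinity of consistent patterns — happens before the first line runs.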
“But surely there are more than three candidate patterns!” There are more complex patterns that fit, but they get low weighting because they are complex.
“But that’s Conjecture and Refutation!” Maybe it is! If you want to say induction cannot possibly work, and maintain that C&R does work, you need to show that induction isn’t a form of C&R. (And also that it’s failing at something that is actually claimed for it by inductionists.)
The only valid objections to induction are that it doesn’t achieve some kind of perfection, such as complete certainty or complete generality.
That was an objection from generality. It’s irrelevant, because I only claimed that induction was capable of working predictively and probabilistically. That indeed does not work for certain high-bar problems, but that should not be summarised as “cannot work at all”.
BTW, we don’t know that what we are doing now is fully general. Maybe there are things human beings just can’t think of.
There is a way of choosing between patterns. It’s simplicity, as I said. It can be shown to work, so long as you are only aiming for probabilistic prediction. There’s an argument that induction can’t tell you the exact laws of nature, with certainty, given a limited data set, but that’s much more ambitious than what I am talking about.
Weighting
If you are only trying to satisfy your own values, then the weighting is just how much you value things in relation to each other. Presumably your objection is the lack of objective criteria… but if you are making a personal decision, why would that matter?
Justification
Yes, Deutsch is frustrating. He tends to state things without justification. That’s consistent with his rejection of justificationism, but at the same time you generally need more than “my idea isn’t contradicted by anything” to motivate you to change your mind. Which is an argument for justificationism from argumentative reasoning.
Per my article, decisiveness, like other idea evaluation, depends on the goal and context. “It costs $100” is decisive criticism for a $20 budget goal but not a $200 budget goal.
But this doesn’t use likelihoods or weights. It uses qualitative differences or breakpoints for quantities (which are the points where the difference in quantity makes a qualitative difference). The generic breakpoint is “good enough for success at my goal or not?”
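The budget example as code (a toy formulation of my own): the same criticism is decisive or not depending on the goal’s threshold, and no weighing is involved anywhere:

```python
def decisive_criticism(cost: float, budget: float) -> bool:
    """Is "it costs `cost`" a decisive criticism, given a budget goal?
    A breakpoint test: the answer flips at the threshold, rather than
    the cost contributing a weight to some running score."""
    return cost > budget

# "It costs $100" against two different goals:
against_20_budget = decisive_criticism(100, budget=20)    # decisive
against_200_budget = decisive_criticism(100, budget=200)  # not a criticism
```

The output is a yes/no verdict relative to a goal, not a score that could be traded off against other factors.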
You can do fine-grained decision making, without limitation, using decisive reasoning alone. And convenience comparisons or marginal benefits are irrelevant given my claim (which is currently an open issue under discussion) that indecisive reasoning doesn’t work at all.
Epistemology should be general purpose and cover impersonal issues like scientific controversies, and allow for productive debate rather than being subjective or arbitrary.
By no objective criteria do you mean people can and should just subjectively/intuitively make up the numbers with no math? If so, how can they do that? How would they or their intuition determine what numbers roughly feel right? By using intelligence via some other full general-purpose epistemology which has been used as a premise/prerequisite of this approach? My understanding is that for this kind of weighted factor math stuff to be a first epistemology – a first solution to how people think intelligently, as I believe it’s claimed to be – then the math has to work objectively and you can’t just rely on people somehow intelligently coming up with numbers that are in the right ballpark. If you rely on intelligence then it’s only a secondary method which leaves all the primary questions in epistemology open.
Also if the numbers are being made up non-objectively so they feel about right, why not just make up a conclusion that feels about right directly? What good is the intermediate step of making up the numbers?
There are many different versions of induction. If you pick a specific version of induction (preferably one with at least one book explaining it in detail like Popper’s books explain Critical Rationalism) then we can discuss how it differs from C&R, what it claims, and whether it lives up to those claims.
Which patterns are simplest? What’s the rule to judge that? Does applying the rule require intelligence as a prerequisite?
But you can’t expect any given context to supply you with a set of decisive criteria that narrow your options to one.
It uses an arbitrary threshold of decisiveness.
The examples you have given look qualitative.
I don’t see how.
Epistemology quite possibly can’t be general purpose, in the sense that the same techniques apply to different kinds of problem.
I mean with subjective criteria.
They can do that. Asking how they do it doesn’t mean it’s impossible.
Different problems require different approaches. I’m not saying subjective weighting is the answer to everything.
Non objective and made-up are not the same thing.
People do. I am not saying there is one method to rule them all.
If a thing is worth doing, it is worth doing with made-up numbers.
Which is why it is difficult to show none of them could possibly work.
I have picked probabilistic prediction, which can be shown to work directly, without needing a theoretical justification.
You know the “aaaaa” pattern is simpler than the others. It’s no great mystery.
People here like Kolmogorov complexity. That isn’t some unanswerable question.
You don’t need much intelligence to do simple induction, since simple organisms can do it.
Most goals have many solutions which we should be ~indifferent between – they all work and it’s not worth our time to optimize more.
In the cases where optimization is worthwhile and there are multiple solutions, we can narrow it down further by considering more ambitious goals.
As a simple approximation, looking only at viable solutions you want to optimize between, you may maximize one factor. Maximizing a single factor doesn’t require combining factors, dimension conversion, rank ordering or weighting, and keeps the method non-compensatory (a problem with one factor can’t be outweighed by some other factors being good). The problems with non-linear value functions are often quite manageable when dealing with only one non-binary factor. If you model decision making as multiplying many binary factors, you can also multiply in one non-binary factor without the problems that come from multiple non-binary factors. This gives you a simple answer which I don’t consider ideal but it’s mostly OK and doesn’t require reading essays to get a more complicated answer.
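The “multiply binary factors, then maximize one non-binary factor” model from this paragraph, sketched with invented option names and factor values:

```python
# Every factor is a binary pass/fail except one, which is maximized
# among the options that pass everything.
options = [
    {"name": "X", "meets_budget": True,  "meets_deadline": True,  "quality": 7},
    {"name": "Y", "meets_budget": True,  "meets_deadline": False, "quality": 9},
    {"name": "Z", "meets_budget": True,  "meets_deadline": True,  "quality": 8},
]

binary_factors = ["meets_budget", "meets_deadline"]

# Multiplying binary factors means one failure zeroes the whole product,
# keeping the method non-compensatory: Y's quality of 9 can't save it.
viable = [o for o in options if all(o[f] for f in binary_factors)]

# One non-binary factor, maximized -- no weights, no dimension
# conversion, no rank-ordering across factors.
best = max(viable, key=lambda o: o["quality"])
```

With only one non-binary factor in play, the non-linear-value-function worries mentioned above don’t arise, since nothing is ever added across dimensions.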