a) code to generate conjectures
b) code to test the conjectures
c) code to reject bad conjectures, and go back to a)
Whereas I only need to write b)
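For concreteness, here is a minimal sketch of that three-part loop. The pattern language (binary repeating patterns) and the enumeration order are hypothetical illustrative choices, not anyone's actual program:

```python
import itertools

def conjectures():
    # (a) generate conjectures: candidate repeating patterns,
    # enumerated in order of increasing length
    for n in itertools.count(1):
        for chars in itertools.product("01", repeat=n):
            yield "".join(chars)

def consistent(pattern, data):
    # (b) test a conjecture: does repeating the pattern reproduce the data?
    repeats = len(data) // len(pattern) + 1
    return (pattern * repeats)[:len(data)] == data

def first_survivor(data):
    # (c) reject conjectures that fail the test, and go back to (a)
    for pattern in conjectures():
        if consistent(pattern, data):
            return pattern

print(first_survivor("010101"))  # -> 01
```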
I actually don’t need an argument because you are the one claiming a distinction
Here’s the argument supporting the claim, again:-
“Note that because it's a white box, you can show there is no time T, in its execution, where the conjecture that the patterns will repeat is formed, as opposed to a previous time where it hasn't…. It expects repeating patterns from boot-up”
In this case, E’ itself is a probabilistic statement (letter frequency) which can be true or false, and it is guaranteed to be true if H is true.
Why does that matter?
Uncertainty is a subjective feeling, and it still needs to be demonstrated that this feeling can be modeled by probability
It sometimes can, since probability sometimes works. Maybe it sometimes doesn't, but I can't see how that results in a sweeping dismissal of Induction.
It is the job of the Bayesian to prove we can model all uncertainty
I'm not defending Bayesianism in that sense, as I said.
As I mentioned before, bickering over definitions was never Popper’s intention
Maybe smuggling in definitions without inconvenient bickering was the intention...you are not automatically on the epistemological high ground when you refuse to engage in “semantics”
Given what I just said, perhaps it's better to rephrase my question: what phenomenon remains unaccounted for without a distinction between inductive reasoning and conjecture?
The ability of agents too simple to form conjectures to nonetheless perform inductive reasoning.
I am sure the Popper-Miller theorem is valid, given that it is not called a conjecture or a blunder
By its authors. But a number of criticisms and counterarguments have been published, eg:-
Perhaps you know that the Popper-Miller argument has a serious logical flaw, identified by Richard Jeffrey in his 1983 book “The Logic of Decision”, when it was first published in their 1983 letter to Nature? Popper and Miller seemed to just ignore the flaw and republished it in PTRSL four years later.
Their argument can be summarised as follows. They seek to establish whether a hypothesis H acquires inductive support from the evidence E under Bayesian theory. H can be expressed logically as (H or not E) and (H or E). (H or not E) is equivalent to the statement “H is true given E is true”, or simply “H if E”. Now (H or E) is trivially implied by E, so they focus on how E supports (H if E). Since Pr((H if E) given E) is clearly ≤ Pr(H if E), (H if E) is never incrementally confirmed by E.
The flaw lies in their claim that “H if E” is that part of H over and above E (i.e. H and not E). It is in fact all of H as well as all of that which is not E.
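That inequality can be checked numerically. The toy verification below (my own sketch, not part of either side's published argument) samples random joint distributions over H and E and confirms that Pr((H if E) given E) never exceeds Pr(H if E):

```python
import random

def both_sides(p_HE, p_HnE, p_nHE, p_nHnE):
    # joint distribution over (H, E); the four arguments sum to 1
    p_E = p_HE + p_nHE
    pr_if = p_HE + p_HnE + p_nHnE   # Pr(H or not E), i.e. Pr(H if E)
    pr_if_given_E = p_HE / p_E      # Pr((H if E) given E) = Pr(H given E)
    return pr_if_given_E, pr_if

random.seed(0)
for _ in range(10_000):
    w = [random.random() for _ in range(4)]
    lhs, rhs = both_sides(*(x / sum(w) for x in w))
    assert lhs <= rhs + 1e-12       # E never incrementally confirms (H if E)
```

The inequality also follows algebraically: with x = Pr(H given E) and p = Pr(E), it reduces to x(1 - p) ≤ 1 - p, which holds whenever p < 1.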
By the way, an argument can be valid mathematically, but still fail to represent the real world. Conveniently, Vasrani's argument has that property.
Philosophers tend to find the math confusing, and mathematicians tend to find the philosophy confusing.
If Popper and Miller have both competencies, others could as well.
...but I can't see how that results in a sweeping dismissal of Induction.
It doesn’t, but it was at least convincing to me that probabilistic reasoning is much more vague than it makes itself out to be.
I’m not defending Bayesianism in that sense, as I said.
Sounds good.
...you are not automatically on the epistemological high ground when you refuse to engage in “semantics”
Agreed, but choosing to focus on the referent rather than the sense while acknowledging the different senses is the ‘high ground’ as you said, and it is an explicit engaging in semantics. I'm happy to discard or adopt terms if they are shown to be obfuscating or useful, respectively.
“Whereas I only need to write b)...
...Here's the argument supporting the claim, again:-
‘Note that because it's a white box, you can show there is no time T, in its execution, where the conjecture that the patterns will repeat is formed, as opposed to a previous time where it hasn't…. It expects repeating patterns from boot-up’...”
“...The ability of agents too simple to form conjectures to nonetheless perform inductive reasoning...”
Perfect, I’m lumping these together because I’m realizing this is the crux and perhaps you can consolidate further. I apologize if I didn’t adequately respond to your other instantiations of these.
For your a)b)c) program, I was only talking about conjectures in that thread, so I would only need to write (a). Is (a) necessarily more complicated than whatever mechanism you have for induction? Also, for me (b) only consists of deductive falsifications, so what you call “induction” would still be part of (a).
For your white box example, it’s not clear to me how initialized expectations are not the same as conjectural dispositions.
For simple agential models which cannot conjecture but still perform inductive reasoning, I’m curious what mechanisms you think are sufficient for conjecture and what mechanisms are necessary for induction? Obviously, for very simple agents, “conjecturing” and “reasoning” aren’t exactly writing down logical statements in English. We’re probably talking about encoding information somehow? Inductive bias, like how ML systems work?
But a number of criticisms and counterarguments have been published
Yeah, and I’ll definitely be looking into those as well. I look forward to it!
By the way, an argument can be valid mathematically, but still fail to represent the real world. Conveniently, Vasrani's argument has that property.
Totally agree with the first part.
If Popper and Miller have both competencies, others could as well.
Definitely, and I hope to be one, but the discourse around it does not inspire confidence.
...but I can't see how that results in a sweeping dismissal of Induction.
It doesn’t, but it was at least convincing to me that probabilistic reasoning is much more vague than it makes itself out to be.
Why is that interesting to me? AFAIC, the debate is about whether induction works. So I'm not interested in general point scoring against Bayes or probability.
For your a)b)c) program, I was only talking about conjectures in that thread, so I would only need to write (a).
Forming conjectures without any attempt to refute or support them is not knowledge generation.
Also, for me (b) only consists of deductive falsifications, so what you call “induction” would still be part of (a).
I’m stipulating that b) is a simple inductor.
Obviously, for very simple agents, “conjecturing” and “reasoning” aren’t exactly writing down logical statements in English. We’re probably talking about encoding information somehow? Inductive bias, like how ML systems work?
No, it's just doing something in a hard-coded way. Not generating an English-level description of what to do, interpreting it, and executing it.
Because either you are not updating credence (which I have no objection to), or you can’t distinguish between hypotheses without assuming simplicity as an axiom (which, feel free to do so, but I already argued it doesn’t need to be assumed). But I think this train of thought seems less important than the necessity of induction discussion in the other threads.
Why is that interesting to me?
It doesn’t need to be. I just found it more compelling.
Forming conjectures without any attempt to refute or support them is not knowledge generation.
Totally agree. So I think we may have talked past each other a bit because I was only comparing induction to conjecture, not the full knowledge-generation process. Sure (b) alone is simpler than (a), (b), and (c) collectively, but that’s not what I was arguing against.
I’m stipulating that b) is a simple inductor.
Okay, well that’s a bit of a bedrock of disagreement then.
No, it's just doing something in a hard-coded way. Not generating an English-level description of what to do, interpreting it, and executing it.
Sure, so what is your sufficient condition for conjecture to be present, and what is your necessary condition for induction to be present?
can’t distinguish between hypotheses without assuming simplicity as an axiom (which, feel free to do so, but I already argued it doesn’t need to be assumed).
So have I:-
There are more complex conjectures than simple ones. So if you conjecture something complex, it is less likely to be the right conjecture. Also, you have only a finite amount of time to consider conjectures, so you can't start at the end of an infinite list. But you can start with the simplest conjecture. Of course, that's roughly how Solomonoff induction works.
(Also, it is completely unclear why “having to assume simplicity” amounts to “not working”. You could argue, as Vasrani does, that Bayes without simplicity doesn't work; I have argued that no real Bayesian ignores simplicity.)
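The counting point, that there are exponentially more complex conjectures and so each individually carries less weight, can be made concrete with a Solomonoff-style prior. In this toy sketch, pattern length crudely stands in for program length:

```python
# Weight each conjecture (a binary pattern, standing in for a program)
# by 4 ** -len(p). There are 2**n patterns of length n, so each length
# class gets total mass 2**n * 4**-n = 2**-n; the whole prior sums to 1,
# and any individual complex conjecture is exponentially less likely
# than a simple one.
def prior(pattern):
    return 4.0 ** -len(pattern)

total = sum(2**n * prior("0" * n) for n in range(1, 40))
print(total)                          # approaches 1 as lengths grow
print(prior("01") > prior("010101"))  # simpler conjectures weigh more: True
```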
but that’s not what I was arguing against
Why not? An aircraft without wings or an engine is simple, but it can't fly.
Okay, well that's a bit of a bedrock of disagreement then.
Because you think I was stipulating something else? Because you think there are no simple inductors?
Sure, so what is your sufficient condition for conjecture to be present, and what is your necessary condition for induction to be present?
You can tell that an algorithm is making predictions on a black-box basis, and you can tell it's an inductor if it does so immediately on boot-up.
A conjecture-and-refutation machine has to be complex enough to form high level representations, and make inferences from them.
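A minimal sketch of such a boot-up inductor, purely illustrative: the expectation of repetition is hard-coded, so there is no time T at which a conjecture gets formed.

```python
def simple_inductor(history):
    # Hard-coded from boot-up to expect repetition: the disposition is
    # built in, never formed at some time T during execution.
    if not history:
        return None                        # nothing observed yet
    for k in range(1, len(history) + 1):   # shortest cycle fitting the data
        if all(history[i] == history[i % k] for i in range(len(history))):
            return history[len(history) % k]   # predict the next symbol

print(simple_inductor("0101"))  # -> 0
```

Whether this counts as an inductor or as a built-in "conjectural disposition" is, of course, exactly the point under dispute in this thread.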
I think in each of these threads, we’ve started to go in circles, so if it’s any consolation I’m interested in following your future posts, and if I post anything in the future I would be interested to see your critiques.
Yes, obviously. You need to write
a) code to generate conjectures b) code to test the conjectures c) code to reject bad conjectures, and go back to a)
Whereas I only need to write b)
Here’s the argument supporting the claim, again:-
“Note that because it's a white box, you can show there is no time T, in its execution, where the conjecture that the patterns will repeat is formed, as opposed to a previous time where it hasn't…. It expects repeating patterns from boot-up”
Why does that matter?
It sometimes can, since probability sometimes works. Maybe it sometimes doesn't, but I can't see how that results in a sweeping dismissal of Induction.
I'm not defending Bayesianism in that sense, as I said.
Maybe smuggling in definitions without inconvenient bickering was the intention...you are not automatically on the epistemological high ground when you refuse to engage in “semantics”
The ability of agents too simple to form conjectures to nonetheless perform inductive reasoning.
By its authors. But a number of criticisms and counterarguments have been published, eg:-
By the way, an argument can be valid mathematically , but still fail to represent the real world. Conveniently, Vasrani’s argument has that property.
If Popper and Miller have both competencies, others could as well.
Because you gave an example that didn’t work?
It doesn’t, but it was at least convincing to me that probabilistic reasoning is much more vague than it makes itself out to be.
Sounds good.
Agreed, but choosing to focus on the referent rather than the sense while acknowledging the different senses is the ‘high ground’ as you said, and it is an explicit engaging in semantics. I’m happy to discard or adopt terms if they are shown to be obfuscating or useful respectively.
Perfect, I’m lumping these together because I’m realizing this is the crux and perhaps you can consolidate further. I apologize if I didn’t adequately respond to your other instantiations of these.
For your a)b)c) program, I was only talking about conjectures in that thread, so I would only need to write (a). Is (a) necessarily more complicated than whatever mechanism you have for induction? Also, for me (b) only consists of deductive falsifications, so what you call “induction” would still be part of (a).
For your white box example, it’s not clear to me how initialized expectations are not the same as conjectural dispositions.
For simple agential models which cannot conjecture but still perform inductive reasoning, I’m curious what mechanisms you think are sufficient for conjecture and what mechanisms are necessary for induction? Obviously, for very simple agents, “conjecturing” and “reasoning” aren’t exactly writing down logical statements in English. We’re probably talking about encoding information somehow? Inductive bias, like how ML systems work?
Yeah, and I’ll definitely be looking into those as well. I look forward to it!
Totally agree with the first part.
Definitely, and I hope to be one, but the discourse around it does not inspire confidence.
Why didn’t it work?
Why is that interesting to me? AFAIC, the debate is about whether induction works. So I'm not interested in general point scoring against Bayes or probability.
Forming conjectures without any attempt to refute or support them is not knowledge generation.
I’m stipulating that b) is a simple inductor.
No, it's just doing something in a hard-coded way. Not generating an English-level description of what to do, interpreting it, and executing it.
Because either you are not updating credence (which I have no objection to), or you can’t distinguish between hypotheses without assuming simplicity as an axiom (which, feel free to do so, but I already argued it doesn’t need to be assumed). But I think this train of thought seems less important than the necessity of induction discussion in the other threads.
It doesn’t need to be. I just found it more compelling.
Totally agree. So I think we may have talked past each other a bit because I was only comparing induction to conjecture, not the full knowledge-generation process. Sure (b) alone is simpler than (a), (b), and (c) collectively, but that’s not what I was arguing against.
Okay, well that’s a bit of a bedrock of disagreement then.
Sure, so what is your sufficient condition for conjecture to be present, and what is your necessary condition for induction to be present?
So have I:-
(Also, it is completely unclear why “having to assume simplicity” amounts to “not working”. You could argue, as Vasrani does, that Bayes without simplicity doesn't work; I have argued that no real Bayesian ignores simplicity.)
Why not? An aircraft without wings or an engine is simple, but it can't fly.
Because you think I was stipulating something else? Because you think there are no simple inductors?
You can tell that an algorithm is making predictions on a black-box basis, and you can tell it's an inductor if it does so immediately on boot-up.
A conjecture-and-refutation machine has to be complex enough to form high level representations, and make inferences from them.
I think in each of these threads, we’ve started to go in circles, so if it’s any consolation I’m interested in following your future posts, and if I post anything in the future I would be interested to see your critiques.