Indeed the theory “combustion occurs when something from the substance goes into the air” is simpler...
Seems like a simpler theory. Is a shorter sentence.
The mind isn’t going to have any way to detect that neutrinos have mass (even if it suspects that) until it sees evidence that they oscillate. Etc.
Sure, knowledge increases far more than arithmetically with additions of either smarts or data.
But an actual good Bayesian should not need to specially test a hypothesis once the pre-existing evidence has singled it out as extremely likely.
This is true only if he or she was sub-optimal when gathering the data. If you narrowed down to a hypothesis while doddering about, then smarten up and you could probably determine whether it’s true. If you were doing your best, someone smarter in the relevant way probably can. To judge a hypothesis well, a Bayesian of any quality must improve beyond the level of smarts that merely sufficed to single it out.
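To make the “singled out by pre-existing evidence” point concrete, here is a toy sketch of my own (the coin-bias hypotheses and the numbers are made up, not anything from the thread): once the data has made one hypothesis dominate, a further test barely moves the posterior.

```python
# Toy illustration: a Bayesian agent updating over three invented
# coin-bias hypotheses. After enough evidence singles one out,
# an extra "test" observation changes the posterior almost not at all.

from math import prod

hypotheses = {"fair": 0.5, "biased-heads": 0.9, "biased-tails": 0.1}
prior = {h: 1 / 3 for h in hypotheses}

def posterior(prior, flips):
    """Posterior over hypotheses after observing a sequence of 'H'/'T' flips."""
    unnorm = {
        h: prior[h] * prod(p if f == "H" else 1 - p for f in flips)
        for h, p in hypotheses.items()
    }
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

after_20 = posterior(prior, "H" * 18 + "TT")   # 18 heads, 2 tails
after_21 = posterior(after_20, "H")            # one further test flip

print(after_20["biased-heads"])  # already very close to 1
print(after_21["biased-heads"])  # the extra flip moves it only slightly
```

The point of the sketch is only that the marginal value of a dedicated test collapses once earlier evidence has done the work.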
they won’t bring one hypothesis to the front; they will often have a fair number of hypotheses that could explain the incomplete data.
I think this feels like having no idea at all, with no conscious hypothesis.
in his case, hitting on the simplest hypothesis that satisfied a large set of nice conditions
Doing this feels like reasoning, or concentrating, or even zoning out, I suspect. It is a varyingly subconscious pruning of a tree of hypotheses, not a conscious search through them one by one that gets lucky by stumbling on a good one.
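As a loose illustration of the metaphor (entirely my own toy, with a made-up scoring function), here is what pruning a tree of hypotheses looks like next to checking candidates one by one: whole subtrees get cut off as soon as they provably can’t beat the best candidate so far.

```python
# Toy contrast: exhaustive one-by-one checking of hypothesis "leaves"
# versus branch-and-bound pruning of the hypothesis tree.
# The target and scoring function are invented for illustration only.

from itertools import product

TARGET = (1, 0, 1, 1, 0, 1)
DEPTH = len(TARGET)

def score(bits):
    # Made-up score: how many positions agree with a hidden target.
    return sum(b == t for b, t in zip(bits, TARGET))

def best_by_enumeration():
    # Checks every one of the 2**DEPTH complete hypotheses, one by one.
    leaves = list(product((0, 1), repeat=DEPTH))
    return max(leaves, key=score), len(leaves)

def best_by_pruning():
    visited = 0
    best = ((), -1)

    def walk(prefix):
        nonlocal visited, best
        visited += 1
        # Optimistic bound: matches so far plus every still-undecided position.
        bound = score(prefix) + (DEPTH - len(prefix))
        if bound <= best[1]:
            return  # prune: this whole subtree can never beat the best so far
        if len(prefix) == DEPTH:
            best = (prefix, score(prefix))
            return
        for b in (1, 0):
            walk(prefix + (b,))

    walk(())
    return best[0], visited

print(best_by_enumeration())  # same winner, every leaf examined
print(best_by_pruning())      # same winner, with subtrees cut off early
```

Both routes find the same best hypothesis; the pruned walk just never descends into branches that are already hopeless, which is roughly what the “pruning, not stumbling” picture suggests.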
But often the actual process will be that they need more data.
Usually not, once one consciously notices a finite set of hypotheses. Data can substitute for better thinking (there’s no law against that), but it’s not a “need” in many senses of the term.
So this seems like a more valid point: There are problems of human cognitive biases that go in the other direction
The point is that human minds just aren’t efficient processors, biases aside. “The” point (there are several) isn’t about overcoming bias; it’s that a well-designed AI brain would need less computing power than our brains use to be smarter than any human. And it wouldn’t be limited to that.
Seems like a simpler theory. Is a shorter sentence.
Yes, simplicity in the English language is not at all a good metric of actual simplicity for a decent prior. However, in this particular case both hypotheses account for the same qualitative observations, and I strongly suspect that if one did try to make them into some formal system, one would find that the second hypothesis is actually more complicated, since it contains a conjunction.
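For a sense of what a crude formalization might look like, here is a toy sketch of my own (the token lists and the one-bit-per-token cost are invented stand-ins for a real description-length measure, nothing like an actual formal prior): the conjunction costs extra tokens, so a simplicity prior of the 2^(−length) sort starts the conjunctive hypothesis out behind.

```python
# Crude toy: compare two hypotheses by description length, where every
# token costs one bit. The token lists are invented paraphrases; the
# conjunctive (phlogiston-style) hypothesis pays for its extra clause.

oxygen_style = ["burning", "combines", "substance", "with", "part-of-air"]
phlogiston_style = [
    "burning", "releases", "phlogiston",
    "and",  # the conjunction costs tokens of its own
    "air", "has", "limited", "phlogiston-capacity",
]

def simplicity_prior(tokens):
    # Toy prior: probability halves with each additional token.
    return 2.0 ** -len(tokens)

print(simplicity_prior(oxygen_style) > simplicity_prior(phlogiston_style))  # True
```

The units are meaningless; the only takeaway is structural, namely that a conjunction necessarily lengthens the description and so lowers this kind of prior.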
I need to think about the rest of your remarks more before responding. I think I agree with most of them.
(And right now I’m really tempted to pretend to be an internet crank and start going around the internet preaching that phlogiston is correct).
I don’t think I agree. To be equivalent, the summary of the phlogiston hypothesis would also have to include that air has a definite, limited capacity for burned phlogiston, and that no other known substance does, nor does vacuum.