It’s not enough for a hypothesis to be consistent with the evidence; for the evidence to count in its favor, it must be more likely under the hypothesis than under its negation. How much more likely determines how strong the evidence is. (Likelihood ratios.)
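A minimal sketch in Python, with made-up numbers, of what the likelihood ratio is doing:

```python
# Evidence E counts in favor of hypothesis H only if it is more probable under H
# than under not-H; the likelihood ratio P(E|H) / P(E|~H) says how much more.
p_e_given_h = 0.80       # chance of seeing E if H is true (assumed)
p_e_given_not_h = 0.20   # chance of seeing E if H is false (assumed)

likelihood_ratio = p_e_given_h / p_e_given_not_h   # 4.0: E favors H four to one

prior_odds = 1.0                                   # 1:1 prior, i.e. P(H) = 0.5 (assumed)
posterior_odds = prior_odds * likelihood_ratio     # Bayes' rule in odds form
posterior_prob = posterior_odds / (1 + posterior_odds)
print(likelihood_ratio, posterior_prob)            # 4.0 0.8
```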
Knowledge is probabilistic/uncertain (priors) and is updated based on the strength of the evidence. A lot of weak evidence can add up (or multiply, actually, unless you’re using logarithms).
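A sketch of how weak evidence accumulates, assuming independent pieces of evidence with made-up likelihood ratios:

```python
import math

# Five independent pieces of weak evidence, each only 1.5x likelier under H than
# under ~H (numbers assumed for illustration).
weak_ratios = [1.5] * 5
prior_odds = 1.0  # 1:1 prior (assumed)

# Likelihood ratios multiply the odds...
posterior_odds = prior_odds
for lr in weak_ratios:
    posterior_odds *= lr              # ends near 7.6:1 after five weak updates

# ...or, equivalently, their logarithms add.
log_posterior_odds = math.log(prior_odds) + sum(math.log(lr) for lr in weak_ratios)
assert math.isclose(posterior_odds, math.exp(log_posterior_odds))
print(posterior_odds)                 # 7.59375
```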
Your level of knowledge is usually not literally zero, even when uncertainty is very high, and you can start from there. (Upper/Lower bounds, Fermi estimates.) Don’t say, “I don’t know.” You know a little.
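A toy Fermi estimate (every figure here is a loose assumption, not data) showing how rough bounds beat a flat “I don’t know”:

```python
import math

# Roughly how many piano tuners work in a city of a million people?
population = 1_000_000                    # order-of-magnitude guess
people_per_household = 2                  # rough
households_with_piano = 1 / 20            # somewhere between 1/10 and 1/50, say
tunings_per_piano_per_year = 1
tunings_per_tuner_per_year = 2 * 5 * 50   # ~2 a day, 5 days a week, 50 weeks

pianos = population / people_per_household * households_with_piano
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))                      # a few dozen, not "no idea"

# Given only an upper and a lower bound, the geometric mean is a common point estimate.
lower, upper = 10, 500
print(round(math.sqrt(lower * upper)))    # ~71
```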
A hypothesis can be made more ad-hoc to fit the evidence better, but this must lower its prior. (Occam’s razor.)
The reverse of this also holds. Cutting out burdensome details makes the prior higher. Disjunctive claims get a higher prior; conjunctive claims a lower one.
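A quick check of the conjunction/disjunction point, assuming independent sub-claims with made-up priors:

```python
# Each added detail (conjunct) can only lower a claim's probability; each added
# disjunct can only raise it. Illustrative independent sub-claims:
p_a, p_b = 0.3, 0.4                        # assumed priors

p_conjunction = p_a * p_b                  # 0.12, never above min(p_a, p_b)
p_disjunction = p_a + p_b - p_a * p_b      # 0.58, never below max(p_a, p_b)

assert p_conjunction <= min(p_a, p_b)
assert p_disjunction >= max(p_a, p_b)
```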
Solomonoff’s Lightsaber is the right way to think about this.
More direct evidence can “screen off” indirect evidence. If it’s along the same causal chain, you’re not allowed to count it twice.
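A small sketch of screening off, using an assumed causal chain A → B → C with made-up conditional probabilities:

```python
# In the chain A -> B -> C, once the more direct evidence B is known, the
# indirect evidence A adds nothing further about C (so it can't be counted twice).
p_a = 0.3
p_b_given_a = {True: 0.9, False: 0.2}
p_c_given_b = {True: 0.8, False: 0.1}

def joint(a, b, c):
    pa = p_a if a else 1 - p_a
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    pc = p_c_given_b[b] if c else 1 - p_c_given_b[b]
    return pa * pb * pc

def prob_c(b, a_values):
    """P(C=True | B=b), marginalizing (or conditioning) over the given A values."""
    total = sum(joint(a, b, c) for a in a_values for c in (True, False))
    c_true = sum(joint(a, b, True) for a in a_values)
    return c_true / total

# Conditioning on A makes no difference once B is known:
assert abs(prob_c(True, [True]) - prob_c(True, [False])) < 1e-9
print(prob_c(True, [True, False]))   # 0.8, the same as P(C|B) alone
```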
Many so-called “logical fallacies” are correct Bayesian inferences.
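As a hedged illustration of that last claim (numbers made up): “appeal to authority” is a textbook informal fallacy, yet it still supplies weak Bayesian evidence, because experts assert true claims somewhat more often than false ones.

```python
p_expert_asserts_given_true = 0.6    # assumed
p_expert_asserts_given_false = 0.3   # assumed

likelihood_ratio = p_expert_asserts_given_true / p_expert_asserts_given_false  # 2.0

prior = 0.5                          # assumed
prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds / (1 + posterior_odds))   # ~0.67: a modest, not decisive, update
```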
Many so-called “logical fallacies” are correct Bayesian inferences.
I find this a very interesting claim, and I wonder whether anyone has applied it to a list of logical fallacies such as one might find in an Intro to Logic textbook.
I’m assuming one could get all of that from reading through the Sequences, but it seems to me a cheat-sheet-type document would be much more helpful.
Wikipedia has a list. Note that even the “informal” fallacies are often “so-called ‘logical fallacies’”.
Fallacies as weak Bayesian evidence has some good exposition on a few of them from a Bayesian perspective. There could be more under the fallacies tag.
There’s also some discussion under Logical fallacy poster.