Sometimes, though, it gives an interesting insight into what’s going on: often a case where classical logic tells us that an inference is just fine, but informal pragmatics tells us that there is something silly about it.
Can you please give an example of this?
It’s an interesting experience to learn formal logic and then take a higher-level math class (any proof-intensive topic). During the process of finding a proof, we ask all sorts of questions of the form “does that imply that?”. However, since we’re typically proving something which we already know is a theorem, we could logically answer: “Yes: any two true statements imply one another, and both of those statements are true.” This is a silly and unhelpful reply, of course. One way of seeing why is to point out that although we may already be willing to believe the theorem, we are trying to construct an argument which could increase the certainty of that belief; hence, the direction of propagation is towards the theorem, so any belief we may have in the theorem cannot be used as evidence in the argument.
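The "yes, trivially" answer can be made concrete in a few lines of Python, treating statements as bare truth values (a toy sketch, not anyone's actual proof practice):

```python
# Material implication is truth-functional: p -> q is false only when
# p is true and q is false.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Stand-ins for "some lemma we believe" and "the theorem we already accept".
lemma = True
theorem = True

# Any two true statements materially imply one another -- the logically
# correct but pragmatically useless answer from the comment above.
assert implies(lemma, theorem) and implies(theorem, lemma)

# The full truth table, for reference:
for p in (False, True):
    for q in (False, True):
        print(p, q, implies(p, q))
```

The point is that `implies` looks only at the truth values, not at any derivation connecting the two statements, which is exactly why the answer is unhelpful mid-proof.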
What do you think, am I overstepping my bounds here? I feel like the probabilistic case gives us something more. In classical logic, we either believe the statement already or not; we don’t need to worry about counting evidence twice because evidence is either totally convincing or not convincing at all.
Because in casual speech the question doesn’t actually mean “does that imply that?”, but rather “do we have a derivation of that from that, using our set of inference rules?” Not the same, but people seldom realise the distinction.
This is the “paradox of the material conditional”, which is one of the primary motivations of relevance logic—to provide a sentential connective that corresponds to how we actually use “implies”, as opposed to the material (truth-functional) implication.
http://plato.stanford.edu/entries/logic-relevance/
Good point! Perhaps you won’t be surprised, though, if I say that my own preferred account of the conditional is the probabilistic conditional.
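The gap between the two readings can be shown on a toy joint distribution (made-up numbers, purely for illustration): the probability that the material conditional A ⊃ B is *true* generally exceeds the conditional probability P(B | A).

```python
# Toy joint distribution over (A, B); the four probabilities sum to 1.
joint = {(True, True): 0.1, (True, False): 0.1,
         (False, True): 0.4, (False, False): 0.4}

# P(A -> B) for the material conditional: true in every case except A and not-B.
p_material = 1 - joint[(True, False)]

# The probabilistic conditional: P(B | A) = P(A and B) / P(A).
p_A = joint[(True, True)] + joint[(True, False)]
p_conditional = joint[(True, True)] / p_A

print(p_material)     # 0.9  -- mostly because A is usually false
print(p_conditional)  # 0.5  -- given A, B is a coin flip
```

The material conditional gets most of its probability "for free" from cases where A is false, which is the probabilistic face of the same paradox.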
That is also a fair interpretation, especially for those students who just want to get the homework over with and don’t really care about increasing their confidence in the theorem being re-proved.
If we additionally care about the argument and agree with all the inference rules, then I think there is a little more explaining to do.
Not only for the students, I think. Confusion between implication and inference was widespread enough to motivate Lewis Carroll to write an essay (“What the Tortoise Said to Achilles”), and not much has changed since then. I didn’t properly understand the distinction even after finishing university.
Another example (perhaps a bit frivolous): when browsing Less Wrong comments and deciding which to upvote, it might be tempting to take the existing upvotes into account. However (to an extent—it’s just an analogy), this is like using the fact that my probability for some statement A is high as an argument to increase the probability of A. The direction of propagation is towards the estimated goodness of the post, so using that estimate in the argument is bad form.
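The analogy can be put in Bayesian terms with a numeric sketch (the prior and likelihood ratio are made up): updating on genuine evidence once is fine, but feeding the resulting posterior back in as if it were fresh evidence double-counts it.

```python
# Odds-form Bayesian update: multiply prior odds by the likelihood ratio.
def update(p: float, likelihood_ratio: float) -> float:
    odds = p / (1 - p)
    odds *= likelihood_ratio
    return odds / (1 + odds)

prior = 0.5  # initial credence that the comment is good (assumed)
lr = 3.0     # likelihood ratio of the actual evidence, e.g. reading it (assumed)

# Correct: one update on one piece of evidence.
posterior = update(prior, lr)        # 0.75

# Incorrect: treating my own (or the crowd's) resulting confidence as
# a *new* piece of evidence and updating again on the same information.
double_counted = update(posterior, lr)  # 0.9, with no new information
```

Upvoting a comment *because* it is already upvoted is the second call: the existing score is (partly) downstream of the same evidence I am about to count again.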
Give me a long enough lever and a place to stand...
Archimedes did not know that gravity was caused by the Earth’s mass. His only mistake was overconfidence about the cause of gravity, which can be seen from Bayesian reasoning, not just informal pragmatics.