It’s an interesting experience to learn formal logic and then take a higher-level math class (any proof-intensive topic). During the process of finding a proof, we ask all sorts of questions of the form “does that imply that?”. However, since we’re typically proving something which we already know is a theorem, we could logically answer: “Yes: any two true statements imply one another, and both of those statements are true.” This is a silly and unhelpful reply, of course. One way of seeing why is to point out that although we may already be willing to believe the theorem, we are trying to construct an argument which could increase the certainty of that belief; hence, the direction of propagation is towards the theorem, so any belief we may have in the theorem cannot be used as evidence in the argument.
What do you think, am I overstepping my bounds here? I feel like the probabilistic case gives us something more. In classical logic, we either believe the statement already or not; we don’t need to worry about counting evidence twice because evidence is either totally convincing or not convincing at all.
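The double-counting worry can be made concrete with a toy Bayesian update (a minimal sketch with made-up numbers; `bayes_update` is just a hypothetical helper, not anything from the discussion): feeding the same piece of evidence into the update twice illegitimately inflates the posterior, which is exactly why belief in the theorem can't also serve as evidence for it.

```python
def bayes_update(prior, lik_if_true, lik_if_false):
    """Posterior P(H | E) via Bayes' rule for a binary hypothesis H."""
    numerator = lik_if_true * prior
    return numerator / (numerator + lik_if_false * (1 - prior))

prior = 0.5
# One observation that is 3x as likely if H holds than if it doesn't.
once = bayes_update(prior, 0.9, 0.3)    # 0.75
# Counting the *same* observation again treats it as fresh evidence
# and pushes the posterior up further, with no new information.
twice = bayes_update(once, 0.9, 0.3)    # 0.9
print(once, twice)
```

In classical logic this failure mode can't arise: a statement is simply proved or not, so there is no intermediate degree of belief to inflate.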
Because in casual speech the question doesn’t actually mean “does that imply that?”, but rather “do we have a derivation of that from that, using our set of inference rules?” Not the same, but people seldom realise the distinction.
This is the “paradox of the material conditional”, which is one of the primary motivations of relevance logic—to provide a sentential connective that corresponds to how we actually use “implies”, as opposed to the material (truth-functional) implication.
http://plato.stanford.edu/entries/logic-relevance/
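The truth-functional conditional being discussed can be sketched in a few lines (a minimal illustration; `material_implication` is just a hypothetical name for the connective):

```python
def material_implication(p, q):
    """Truth-functional 'implies': false only when p is true and q is false."""
    return (not p) or q

# The 'paradox': any two true statements materially imply one another,
# regardless of whether they have anything to do with each other.
assert material_implication(True, True)
# A false antecedent also makes the conditional true, however irrelevant.
assert material_implication(False, True)
assert material_implication(False, False)
# The only falsifying row of the truth table:
assert not material_implication(True, False)
```

Relevance logics reject some of these rows' consequences precisely because this connective validates "implications" with no connection between antecedent and consequent.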
Good point! Perhaps you won’t be surprised, though, if I say that my own preferred account of the conditional is the probabilistic conditional.
That is also a fair interpretation, especially for those students who just want to get the homework over with and don’t really care about increasing their confidence in the theorem being re-proved.
If we additionally care about the argument and agree with all the inference rules, then I think there is a little more explaining to do.
Not only for the students, I think. Confusion between implication and inference was widespread enough to motivate Lewis Carroll to write an essay, and not much has changed since then. I didn’t properly understand the distinction even after finishing university.