These are fascinating apps, but I just know Wittgenstein is spinning in his grave.
“Because the claims are natural language text, the structure truthmapper enforces is looser than a syllogism; merely a tree of claims and supporting claims.”
For me, this is the rub: The truthmapper format, which combines the structure of syllogisms with the tumult of online communities and the opacity and weakness of language, invites a kind of cargo cult logic, where things are called premises and conclusions but sound like UN General Assembly resolutions:
“Premise 1: We are all born free and equal in dignity and rights!
Premise 2: We are not equal!
Conclusion: Revolution!”
These ideas are ambitious, and some progeny of these experiments may turn out to be the next Wikipedia, but you’d have a hard time convincing me that discourse is being elevated unless all the arguments on truthmapper are presented in first-order symbolic logic and all the content of the assertions is written in Lojban. Until then, clear writing and frank, iterated assertions in natural language are probably preferable.
A syllogism is three lines, each containing a quantifier, a subject, and a possibly negated predicate. It is a really rigid form of argument, and not tree-like at all. You may be thinking of a sorites, which is a bunch of syllogisms put together. Tree-structured arguments are incredibly common in all kinds of logic, proof theory, and argumentation theory. Leaping from “tree-shaped” to “sorites” is like leaping from “flattish” to “flat-earthers”.
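To make the contrast concrete: the “tree of claims and supporting claims” structure described above can be sketched as a tiny recursive data type. This is just an illustration of the shape of the format, not truthmapper’s actual implementation; the class and field names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A node in a tree-of-claims argument: free-form natural-language
    text plus any number of supporting sub-claims (names hypothetical)."""
    text: str
    supports: list["Claim"] = field(default_factory=list)

    def depth(self) -> int:
        # A bare assertion has depth 1; each layer of support adds one.
        return 1 + max((c.depth() for c in self.supports), default=0)

# Unlike a syllogism's fixed three lines, the tree branches and nests freely,
# and nothing constrains a child to logically entail its parent.
root = Claim("Revolution!", [
    Claim("We are all born free and equal in dignity and rights"),
    Claim("We are not equal", [
        Claim("Some supporting observation, in ordinary prose"),
    ]),
])
print(root.depth())  # → 3
```

Note that the structure enforces only nesting, not validity — which is exactly the looseness the quoted line concedes.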
Regardless of my nitpicking, I agree with you: we need progeny of these experiments. I may disagree about the details (predicate logic? Lojban?!).
Thank you for the information on syllogisms. I know I was using the term wrong below, and I really should have known better. It may be nitpicking, but I think rationalists more than others are probably interested in making sure they use words correctly.
If you’re familiar with Lojban, I’d be very interested in a post on how you think it would or wouldn’t help with rationality.