Your initial piece of argumentation was more readable than any of the later formalizations (either plaintext or graphical), but for more complex arguments the approach may have merit. I was immediately reminded of this ongoing attempt to map William Lane Craig’s argument for theism.
I’d like to emphasize that readability is not the goal—drawing correct conclusions is.
An argument based on a colorful analogy might be readable, but not sound. (In particular, I’m thinking of neural nets, genetic algorithms, and simulated annealing, each of which is motivated by a colorful analogy.)
Suppose someone gives you a thousand-line piece of code, never previously compiled or run. Many pieces of code crash when compiled and run for the first time. Which is stronger evidence, a well-written textual argument that it will not crash, or running it once?
With logical links as weak as those in your example, most arguments longer than 10 steps will reach incorrect conclusions anyway.
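The decay is easy to quantify under a toy assumption: if each inferential link holds independently with some probability p, an n-step chain holds only with probability p**n. (The strength value below is illustrative, not taken from the example.)

```python
# Toy model: each link in an argument chain holds independently with
# probability p; the whole chain holds only if every link does.
def chain_reliability(p: float, n: int) -> float:
    """Probability that an n-link chain of strength-p links holds."""
    return p ** n

# Even fairly strong links decay quickly over ten steps.
print(chain_reliability(0.9, 10))  # 0.9**10 ≈ 0.349 — worse than a coin flip
```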
Agreed.
I have some notion that an argument tree could be translated or incorporated into a Bayes net model. There’s an intuition (which we share) that, given links with a particular imperfect strength, arguments consisting of a few long chains are weaker than arguments that are “bushy” (offering many independent reasons for the conclusion). A Bayes net model would quantify that intuition.
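A minimal sketch of what such a model would say, with made-up link strengths and a naive independence assumption rather than a full Bayes net: treat a chain as a conjunction of links, and a bushy argument as a noisy-OR over independent lines of support.

```python
# Sketch: compare an n-step chain against m independent lines of support,
# each with the same link strength p. Assumes the lines really are
# independent, which is exactly what can fail in practice.
def chain(p: float, n: int) -> float:
    # A chain holds only if every link holds.
    return p ** n

def bushy(p: float, m: int) -> float:
    # Noisy-OR: the conclusion stands if at least one line survives.
    return 1 - (1 - p) ** m

p = 0.8
print(chain(p, 5))  # 0.8**5 ≈ 0.328
print(bushy(p, 5))  # 1 - 0.2**5 = 0.99968
```

Under these (generous) assumptions, five independent reasons vastly outperform a five-step chain built from the same-strength links.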
In a perfect world, bushiness would indeed imply high reliability. Unfortunately, in our world the different branches of the bush can have hidden dependencies, either accidental or maliciously inserted—they could even all be subtly different rewordings of the same argument—and the technique won’t catch that. So ultimately I don’t think we have invented a substitute for common sense just yet.