From professional experience (I’ve been a programmer since the ’80s and was paid for it from the ’90s onward) I agree with you entirely regarding graphical representation. That doesn’t keep generation after generation of tool vendors from crowing that, thanks to their new insight, programming will finally be made easy via “visual this, that or the other”. UML is the latest such to have a significant impact.
You have me pondering what we might gain from whipping up a Domain-Specific Language (say, in a DSL-friendly base language such as Ruby) to represent arguments in. It couldn’t be too hard to bake some basics of Bayesian inference into that.
PyMC is a DSL in Python for (non-recursive) Bayesian models and Bayesian probability computations. I have been thinking of trying to make an ad-hoc collaborative scenario-projection tool with PyMC and ikiwiki. Users would edit Literate Python (e.g. PyLit or Ly) wiki pages that defined PyMC model modules, and ikiwiki triggers would maintain Monte Carlo sampling computations and update results pages. But it wouldn’t be enough for real argument mapping without decision theory (and possibly some other things).
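To make the idea concrete, here is a minimal, standard-library-only sketch of the kind of Monte Carlo computation such wiki triggers would maintain. It is not PyMC code (PyMC has its own model-declaration API); it just illustrates posterior estimation by rejection sampling for a coin-flip model, where the analytic answer is known so the estimate can be checked.

```python
import random

random.seed(0)

def posterior_mean_via_rejection(heads, flips, n_samples=200_000):
    """Estimate E[p | data] for a coin with Uniform(0,1) prior on p.

    Rejection sampling: draw p from the prior, simulate the flips,
    and keep only the draws that exactly reproduce the observed data.
    The surviving draws are samples from the posterior.
    """
    kept = []
    for _ in range(n_samples):
        p = random.random()                               # prior draw
        simulated = sum(random.random() < p for _ in range(flips))
        if simulated == heads:                            # rejection step
            kept.append(p)
    return sum(kept) / len(kept)

# Observing 7 heads in 10 flips, the analytic posterior is Beta(8, 4),
# whose mean is 8/12 = 0.666...; the estimate should land close to it.
est = posterior_mean_via_rejection(7, 10)
```

A real tool would of course use PyMC’s samplers rather than naive rejection, which scales badly, but the wiki-trigger workflow (edit model page, rerun sampler, update results page) is the same either way.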
I wish I could be optimistic about some DSL approach. The history of AI has a lot of examples of people creating little domain languages. The problem is the lack of ability to handle vagueness. The domain languages work OK on some toy problems and then break down when the researcher tries to extend them to problems of realistic complexity.
On the other hand there are AI systems that work. The best examples I know about are at Stanford—controlling cars, helicopters, etc. In those cases the researchers are confronting realistic domains that are largely out of their control. They are using statistical modeling techniques to handle the ill-defined aspects of the domain.
Notably in both the cars and the helicopters, a lot of the domain definition is done implicitly, by learning from expert humans (drivers or stunt pilots). The resulting representation of domain models is explicit but messy. However it is subject to investigation, refinement, etc. as needed to make it work well enough to handle the target domain.
Both of these examples use Bayesian semantics, but go well beyond cookbook Bayesian approaches, and use control theory, some fairly fancy model acquisition techniques, etc.
There is a lot of relevant tech out there if Less Wrong is really serious about its mission. I haven’t seen much attempt to pursue it yet.
I strongly support the notion of whipping up a DSL for argumentation targeted at LessWrong readers. Philosophy and law argumentation tools seem to target users without any math or logic background, who demand a graphical interface as the primary means of creating arguments. My guess is that LessWrong readers would be more tolerant of Bayesian math and formal logic, of the necessity of learning a little syntax, and of only exporting a graphical representation.
Features might include:
Compose in ordinary ASCII or UTF-8
Compose primarily a running-text argument, indicating the formal structure with annotations
Export as a prettified document, still mostly running text (HTML and LaTeX)
Export as a diagram (automatically laid out, perhaps by Graphviz)
Export as a Bayes net (in possibly several Bayes net formats)
Export as a machine-checkable proof (in possibly several formats)
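The composition-plus-export workflow above could be sketched in a few lines. This is a hypothetical illustration, not a proposed design: the names (`Claim`, `to_dot`, the `>>` support operator) are invented here, and only the Graphviz DOT export from the feature list is shown.

```python
# A minimal sketch of an argument DSL: claims and support links are
# declared inline in ordinary text-like Python, then exported as
# Graphviz DOT for automatic diagram layout.

class Claim:
    def __init__(self, label, text, prior=None):
        self.label, self.text, self.prior = label, text, prior
        self.supports = []           # claims this one is evidence for

    def __rshift__(self, other):     # a >> b means "a supports b"
        self.supports.append(other)
        return other

def to_dot(claims):
    """Export the argument graph in Graphviz DOT format."""
    lines = ["digraph argument {"]
    for c in claims:
        note = f"\\n(prior {c.prior})" if c.prior is not None else ""
        lines.append(f'  {c.label} [label="{c.text}{note}"];')
    for c in claims:
        for target in c.supports:
            lines.append(f"  {c.label} -> {target.label};")
    lines.append("}")
    return "\n".join(lines)

# Usage: a two-step argument, annotated with one prior.
a = Claim("A", "Tool vendors overpromise", prior=0.9)
b = Claim("B", "Visual programming is overhyped")
a >> b
dot = to_dot([a, b])
```

The same internal graph could feed the other exporters (HTML/LaTeX prose, Bayes net formats, proof checkers); the point is that the author composes in plain text and the diagrams are derived, not drawn.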
I’m currently learning noweb, the literate programming tool by Norman Ramsey.
Well, visual programming of visual things is good, but that’s just WYSIWYG.