Strongly in support. There are probably benefits we aren’t yet imagining from knowing the things that formal tools like this would help us know. (As a special case: Intelligence enhancement as existential risk mitigation.)
Seconding the recommendation to learn about Bayesian inference engines like SMILE, PyMC (use it with IPython in pylab mode for interactive analysis and plotting), the Matlab Bayes Net Toolbox, VIBES, JAGS, or the Open Source Probabilistic Networks Library, and to play with interfaces like GeNIe, the Matlab interface to BUGS, the R interface to BUGS, BUGS itself (if you have an experienced user to look over your shoulder and explain which buttons need to be pushed in what order after clicking on which model source code windows or typing variable names into which boxes), or the Netica demo version.
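To make that concrete, here is a minimal sketch of the kind of model these engines handle, written against a recent PyMC release (the API has changed over the years, so treat the exact calls as illustrative rather than canonical); the scenario and numbers are invented: estimating one source’s reliability from a small track record of right and wrong calls.

```python
import numpy as np
import pymc as pm

# Hypothetical track record for one source: 1 = the source turned out to be
# right about a past claim, 0 = it turned out to be wrong.
record = np.array([1, 1, 0, 1, 1, 1, 0, 1])

with pm.Model() as model:
    # Prior belief about the source's reliability, before seeing the record.
    reliability = pm.Beta("reliability", alpha=2.0, beta=2.0)
    # Each past call is modeled as an independent draw at that reliability.
    pm.Bernoulli("hits", p=reliability, observed=record)
    # Draw posterior samples by MCMC.
    trace = pm.sample(1000, tune=1000, progressbar=False)

print(float(trace.posterior["reliability"].mean()))
```

Run in an IPython session, the posterior samples can then be plotted and poked at interactively, which is most of the point of playing with these tools.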
It will take a long time to build a community that knows how to use these formal tools. And most arguments can’t be represented at the object level in simple Bayesian networks, or even in Bayesian decision networks.

Some forms of evidence or reasoning (especially assertions of expert opinion or human judgment) can fail in correlated ways that break independence assumptions. Formalizing this would require hyperprior models for the reliability of each kind of possible connection from reality to observation, and possibly also hierarchical models of that reliability, so that evidence about reliability can be transferred from one argument to similar arguments. In some arguments, some of the evidence could have been faked or selectively presented; formalizing this would require game theory. Other arguments are about public policy and would need game theory to model the incentives the policies would create. Still other arguments might be too complex to formalize completely, or to compute formal conclusions from; these might require meta-arguments about the reliability of approximations.

A user community will need experience with many arguments before it knows which formalizations can be used in which situations.
Also related: David Brin’s disputation arenas essay, and the extensive, hierarchically organized resource list on the Mapping Controversies web site, funded by the EU’s Mapping Controversies on Science for Politics project.
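As one illustration of the hierarchical-reliability idea above, here is a minimal sketch, again assuming a recent PyMC API; the experts, track records, and priors are invented for illustration, not a proposal for how real arguments should be scored. A shared hyperprior over expert reliability lets evidence about one expert’s track record shift expectations about similar experts, which is the “transfer evidence about reliability to similar arguments” step.

```python
import numpy as np
import pymc as pm

# Hypothetical track records: each array is one expert's past calls on claims
# whose truth is now known (1 = the expert was right, 0 = wrong).
track_records = [
    np.array([1, 1, 1, 0, 1]),
    np.array([1, 0, 1, 1]),
    np.array([0, 1, 1]),
]

with pm.Model() as model:
    # Hyperprior: how reliable experts of this kind tend to be overall.
    pop_mean = pm.Beta("pop_mean", alpha=2.0, beta=2.0)
    pop_concentration = pm.Gamma("pop_concentration", alpha=2.0, beta=0.1)

    for i, record in enumerate(track_records):
        # Each expert's reliability is drawn from the shared population
        # distribution, so one expert's record informs beliefs about the others.
        reliability = pm.Beta(
            f"reliability_{i}",
            alpha=pop_mean * pop_concentration,
            beta=(1.0 - pop_mean) * pop_concentration,
        )
        pm.Bernoulli(f"hits_{i}", p=reliability, observed=record)

    trace = pm.sample(1000, tune=1000, progressbar=False)
```

Genuinely correlated failures (experts all leaning on the same flawed source, say) would need explicit shared latent causes rather than just a shared hyperprior, and the game-theoretic pieces are harder still; this sketch is only the first rung.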
Reading this comment makes me think that the problem of building formal tools to aid inference is, um, not exactly a million miles away from the object-level problem of building AGI. Hierarchical models, Bayes nets, meta-arguments about reliability of approximations. Perhaps the next thing we’ll be asking for is some way to do interventions in the real world to support causal reasoning?
What evidence would convince you personally that some purported Seed AGI was in fact Friendly?
Sound arguments are central to Friendly AGI research. The arguments cannot be too long, either: if someone hands you a giant proof and you accept it as evidence without understanding it, then your implicit argument is something like “from expert testimony” or “long proofs that look valid under spot-checks are likely to be sound”.
Perhaps the next thing we’ll be asking for is some way to do interventions in the real world to support causal reasoning?

Do you mean formal arguments that someone should do an experiment because knowledge of the results would improve future decision-making? These would be a special case of formal arguments that someone should do something.
Or do you mean that if someone automated the experiments, there might be an AI danger? I know of many arguments for ways these kinds of tools could pose dangers similar to AI dangers, but they are mostly about automating the reasoning, not the experiments.
Do you mean formal arguments that someone should do an experiment because knowledge of the results would improve future decision-making?

Yes, for example.
I was just pointing out that the two are similar problems: improving human group decision-making using formal tools vs. making a thinking machine. The implications of this are complex, and could include both dangers and benefits. One benefit is that as AI gets more advanced, better decision-making tools will become available. For example, the notion of a “cognitive bias” probably depends causally on our having a notion of normative rationality to compare humans against.