Reading this comment makes me think that the problem of building formal tools to aid inference is, um, not exactly a million miles away from the object-level problem of building AGI. Hierarchical models, Bayes nets, meta-arguments about reliability of approximations. Perhaps the next thing we’ll be asking for is some way to do interventions in the real world to support causal reasoning?
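To make the intervention point concrete: in a Bayes net, observing X and setting X are different operations, and the gap between them is exactly what real-world experiments close. Here is a minimal sketch with a three-node net (confounder U → X, U → Y, plus X → Y); all the probability numbers are hypothetical, chosen only to make the confounding visible.

```python
# Hypothetical toy Bayes net: U -> X, U -> Y, X -> Y, all binary.
# The numbers below are illustrative assumptions, not from the discussion.
p_u = {0: 0.5, 1: 0.5}                     # P(U=u)
p_x_given_u = {0: 0.2, 1: 0.8}             # P(X=1 | U=u)
p_y_given_xu = {(0, 0): 0.1, (0, 1): 0.4,
                (1, 0): 0.5, (1, 1): 0.9}  # P(Y=1 | X=x, U=u)

def p_y1_conditional(x):
    """P(Y=1 | X=x) by enumeration -- passive observation."""
    num = den = 0.0
    for u in (0, 1):
        px = p_x_given_u[u] if x == 1 else 1 - p_x_given_u[u]
        joint = p_u[u] * px
        num += joint * p_y_given_xu[(x, u)]
        den += joint
    return num / den

def p_y1_do(x):
    """P(Y=1 | do(X=x)) -- cut the U -> X edge, then average over U."""
    return sum(p_u[u] * p_y_given_xu[(x, u)] for u in (0, 1))

print(p_y1_conditional(1))  # 0.82: inflated by the confounder U
print(p_y1_do(1))           # 0.70: what an actual intervention would find
```

The two numbers disagree precisely because U influences both X and Y, so no amount of passive inference over observational data substitutes for the `do`-style experiment the comment is gesturing at.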
What evidence would convince you personally that some purported Seed AGI was in fact Friendly?
Sound arguments are central to Friendly AGI research. The arguments cannot be too long, either. If someone hands you a giant proof, and you accept it as evidence without understanding it, then your implicit argument is something like “from expert testimony” or “long proofs that look valid to spot-checks are likely to be sound”.
Perhaps the next thing we’ll be asking for is some way to do interventions in the real world to support causal reasoning?
Do you mean formal arguments that someone should do an experiment because knowledge of the results would improve future decision-making? These would be a special case of formal arguments that someone should do something.
Do you mean that if someone automated the experiments there might be an AI danger? I know a lot of arguments for ways there could be dangers similar to AI dangers from these kinds of tools, but they are mostly about automation of the reasoning.
Do you mean formal arguments that someone should do an experiment because knowledge of the results would improve future decision-making?
Yes, for example.
I was just pointing out that the two are similar problems: improving human group decision-making using formal tools vs. making a thinking machine. The implications of this are complex, and could include both dangers and benefits. One benefit is that as AI gets more advanced, better decision-making tools will become available. For example, the notion of a “cognitive bias” probably depends causally upon us having a notion of normative rationality against which to compare humans.