Specify a proof format in which ambiguity (e.g. of cluster-like concepts) is part of the formalism, in which mappings between concepts and the real world are part of the formalism, and which can reference the output of functions as part of the formalism.
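A minimal sketch of what such a format might look like as a data structure. All names here (`Concept`, `ProofNode`, `count_grains`, etc.) are illustrative assumptions, not an established formalism; the point is only that a proof step can carry its vague concepts, their real-world mappings, and recorded function outputs alongside the claim:

```python
# Hypothetical sketch: proof nodes that reference possibly-vague
# concepts, concept-to-world mappings, and function outputs.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    # Informal description of how the concept maps to the real world.
    world_mapping: str

@dataclass
class ProofNode:
    claim: str
    # Concepts this step relies on (possibly cluster-like/vague).
    concepts: list[Concept] = field(default_factory=list)
    # Named functions whose outputs the step references, together with
    # the recorded outputs (so a verifier can re-run and compare).
    function_outputs: dict[str, object] = field(default_factory=dict)
    premises: list["ProofNode"] = field(default_factory=list)

# Example: a step that cites a computed value and a vague concept.
heap = Concept("heap", "a pile of grains of sand, as judged by "
                       "typical human usage of the word")
step = ProofNode(
    claim="1000 grains form a heap",
    concepts=[heap],
    function_outputs={"count_grains": 1000},
)
print(step.claim)  # → 1000 grains form a heap
```

A verifier could then mechanically check the computable parts (re-running `count_grains`) while treating the vague-concept references as explicitly lower-trust.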
Of course, how much trust is put in a given proof/argument-tree would depend on various things (allowing vague concepts makes it less trustworthy).
For cluster-like concepts referenced by proofs, a precise specification of inclusion/exclusion criteria should not be expected, but the more precise the better. Inference rules and examples can specify the degree to which specific instances fall within a specific concept (and may also say that some instances fall neither inside nor outside it).
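The "neither inside nor outside" option can be made concrete with a three-valued membership judgment plus a degree. A toy sketch, assuming a hypothetical vague concept "heap" with made-up thresholds:

```python
# Sketch of graded membership for a cluster-like concept: an instance
# can be judged inside, outside, or explicitly neither, with a degree
# attached in the borderline region. Thresholds are invented for
# illustration only.
from enum import Enum

class Membership(Enum):
    INSIDE = "inside"
    OUTSIDE = "outside"
    NEITHER = "neither"  # borderline: neither clearly in nor out

def classify_heap(grains: int) -> tuple[Membership, float]:
    """Toy rule for the vague concept 'heap of sand'."""
    if grains >= 1000:
        return Membership.INSIDE, 1.0
    if grains <= 10:
        return Membership.OUTSIDE, 1.0
    # Borderline region: refuse a binary verdict; report a degree.
    degree = (grains - 10) / (1000 - 10)
    return Membership.NEITHER, degree

print(classify_heap(5000)[0])  # → Membership.INSIDE
print(classify_heap(3)[0])     # → Membership.OUTSIDE
print(classify_heap(500)[0])   # → Membership.NEITHER
```

An inference rule referencing this concept could then be required to state which membership verdicts (and which degrees) it remains valid under.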
One of the points would be to bring as much as possible within the realm of things where an AGI could be expected to output proofs that are easier to verify than its other output.
My thinking is that this would be most helpful when combined with other techniques/design principles. Outputting the proof (a very formal argument with computable inference steps) is one thing; another is which techniques/processes are used to look for problems with it (as well as looking for problems with the formalism as a whole, and testing/predicting how easily humans can be convinced of things that are false or contradictory under various conditions).
Bonus if these formal proofs/statements can be presented in ways that humans can easily read.