For example, we can add to PA some axiom that rules out its standard model, but leaves many nonstandard ones. The simplest example would be to add the inconsistency of PA—the resulting theory will (counterintuitively) be just as consistent as PA, but quite weird. [...] Can the AI recognize such situations and say “no way, this formal system doesn’t seem to describe my regular integers”?
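The “just as consistent” claim is Gödel’s second incompleteness theorem in action; spelled out as a one-line derivation:

```latex
% If PA is consistent, Goedel's second incompleteness theorem says PA
% cannot prove its own consistency; and whenever a theory T fails to
% prove a sentence phi, T + (not phi) is consistent:
\mathrm{Con}(\mathrm{PA})
  \;\Longrightarrow\; \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})
  \;\Longrightarrow\; \mathrm{Con}\bigl(\mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA})\bigr)
```

So on the assumption that PA itself is consistent, the “weird” theory PA + ¬Con(PA) is consistent too, despite asserting its own subtheory’s inconsistency.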
This seems to be a matter of some relatively straightforward human heuristics. A theory that directly “talks about itself” via its axioms looks weird and suspicious. Not to mention what this axiom would look like if you actually wrote it down alongside the standard PA axioms in the same format!
Note that when these heuristics don’t apply, humans end up confused and in disagreement about what should be considered as “our regular” stuff when faced with various independent statements that look like potential axioms. (As with Euclid’s fifth postulate, the continuum hypothesis, the axiom of choice, etc.)
It will have many interesting provable theorems that are nevertheless common-sensically “false”, e.g. “PA proves 1+1=3”.
More specifically, it will prove: “There exists a natural number that, when decoded according to your Goedelization scheme, yields a proof in PA that 1+1=3.” However, it won’t prove anything about that number that would actually allow you to write it down and see what that proof is. (“Writing down” a natural number essentially means expressing a theorem about a specific relation it has with another number that you use as the number system base.) This suggests another straightforward heuristic: if the theory asserts the existence of objects with interesting properties about which it refuses to say anything more that would enable us to study them, it’s not “our regular” kind of thing.
Does this heuristic say that Chaitin’s constant isn’t a regular real number?
Good point. I should have said “bizarre properties” or something to that effect. Which of course leads us to the problem of what exactly qualifies as such. (There’s certainly lots of seemingly bizarre stuff easily derivable in ordinary real analysis.) So perhaps I should discard the second part of my above comment.
But I think my main point still stands: when there is no obvious way to see what choice of axioms (pun unintended) is “normal” using some heuristics like these, humans are also unable to agree what theory is the “normal” one. Differentiating between “normal” theories and “pathological” ones like PA + ~Con(PA) is ultimately a matter of some such heuristics, not some very deep insight. That’s my two cents, in any case.