I think it’s more subtle. In mathematical logic, there are a few things that can happen to a theory:
It can prove a falsehood. That’s bad: the theory is busted.
It can prove itself consistent. That’s bad too: it implies the theory is inconsistent, by the second incompleteness theorem.
It can prove itself inconsistent. That’s not necessarily bad: the silly theory PA+¬Con(PA), which asserts its own inconsistency, is actually equiconsistent with PA. But it suggests that the theory has a funny relationship with reality (in this case, that any model of it must include some nonstandard integers).
Overall it seems we should prefer theories that don’t say anything much about their own justifications one way or the other. I suspect the right approach in philosophy is the same.
Most of the examples I’m talking about are more like proving false or proving your own finitistic inconsistency than failing to prove your own consistency. Like, if your theory implies a strong (possibly probabilistic) argument that your theory is false, that’s almost like proving false.
Gödel’s incompleteness theorem doesn’t rule out finitistic self-consistency proofs, e.g. the ability to prove in length n that there is no inconsistency proof of length up to n^2. Logical inductors also achieve this kind of finitistic self-trust. I think this is usually a better fit for real-world problems than proving infinitary consistency.
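The bounded-consistency statement being discussed can be written out explicitly. A rough sketch (the notation $\mathrm{Con}_{\le m}$ and the polynomial length bound are my rendering, not standard notation from the thread):

```latex
% Bounded consistency up to proof length m: no proof of falsum of length at most m.
% Prf_T(p, x) is the standard proof predicate: "p encodes a T-proof of x".
\[
  \mathrm{Con}_{\le m}(T) \;:\equiv\;
  \neg \exists p \,\bigl( |p| \le m \,\wedge\, \mathrm{Prf}_T(p, \ulcorner \bot \urcorner) \bigr)
\]
% The finitistic self-trust claim: for each n, T proves its own bounded
% consistency up to n^2 with a proof of length only about n, i.e.
\[
  \text{for all } n:\quad
  T \vdash_{O(n)} \mathrm{Con}_{\le n^2}(T)
\]
% where \vdash_{O(n)} means "provable by a proof of length O(n)".
% This does not contradict the second incompleteness theorem, which only
% blocks a single uniform proof of the unbounded statement Con(T).
```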
Sure, but I don’t see why such self-trust is a good sign. All inconsistent theories have proofs of finitistic self-consistency up to n that are shorter than n (for some n), but only some consistent theories do. So seeing such a proof is Bayesian evidence in favor of inconsistency.