Trivial Mathematics as a Path Forward

A gentleman by the name of Dan Willard, who passed away recently, was a pioneer in a relatively unexamined area of mathematics: mathematical frameworks that are just trivial enough to bypass Gödel's second incompleteness theorem; that is, frameworks that can prove their own consistency (he called them "self-verifying" systems). He pursued them for a reason which may be of interest here: creating AI that can verify itself.

I stumbled across him for an entirely unrelated reason, one connected to my username. (Noticing that, insofar as the universe is a mathematical system, things seem like they might get kind of weird if Gödel applies. I had this weird idea about a perfectly bijective mathematical framework and started investigating whether it would be subject to Gödel—answer: unclear, as I found this guy instead and started thinking about his ideas.)

Willard's approach is of course one option—creating mathematics that simply lacks the precise machinery that runs into Gödel, namely multiplication and addition (his systems have them, but they are built up from subtraction and division as the primitives, and the systems never prove that multiplication is a total function, which is exactly the ingredient Gödel's construction needs). Thus, the approach is necessarily incomplete—that is, it cannot prove every true statement about the arithmetic of the natural numbers.
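To give a flavor of how that plays out (this is my own illustrative paraphrase, not Willard's actual axioms), addition can be recovered as a three-place relation from subtraction truncated at zero, without ever asserting that a sum always exists:

```latex
% Illustration only (paraphrase, not Willard's axioms): addition over the
% naturals, recovered as a relation from subtraction truncated at zero.
x + y = z \;\Longleftrightarrow\; z \geq y \;\wedge\; z - y = x
```

The analogous move for multiplication via division is the load-bearing one: the system never proves that every x and y have a product, and that missing totality fact is precisely what the usual Gödelian construction needs.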

But since no consistent, effectively axiomatizable (compressible to a finite algorithmic basis or set of bases) mathematical system can, we can't exactly hold that against it. (And also, I'm left with a personally annoyed sense of "Wait, that was always an option?")

Thus, contrary to a naive understanding of Gödel, we can in fact create a consistent mathematical framework that proves its own consistency!

This doesn't imply an easy solution to the Halting Problem, which would be the next step; we could cripple a programming language in a similar fashion to dodge the standard undecidability proofs, say, by assigning each function a nonnegative integer index and only permitting a function to call functions with a strictly lower index. Is the halting problem decidable for such a language?
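To make the restriction concrete, here is a minimal sketch of what enforcing such an index discipline might look like; the `check_call_indices` function, the `program` structure, and all of the names are hypothetical, purely for illustration:

```python
# Hypothetical sketch: a toy static check for the "indexed call" discipline
# described above. Not an existing tool or a proposal from Willard.

def check_call_indices(program: dict[str, tuple[int, list[str]]]) -> bool:
    """program maps each function name to (index, names of functions it calls).

    Returns True iff every index is nonnegative and every call goes to a
    function with a strictly lower index. That rules out recursion, direct
    or mutual, because a call chain's indices would have to decrease forever.
    """
    for index, callees in program.values():
        if index < 0:
            return False
        for callee in callees:
            callee_index, _ = program[callee]
            if callee_index >= index:
                return False
    return True


# Example: `main` (index 2) may call `helper` (index 1) and `leaf` (index 0);
# no chain of calls can ever cycle back, so no recursion is possible.
program = {
    "main":   (2, ["helper", "leaf"]),
    "helper": (1, ["leaf"]),
    "leaf":   (0, []),
}
assert check_call_indices(program)
```

Ruling out recursion this way does not by itself decide halting, though: an ordinary loop can still run forever, which is why the question still has teeth.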

Well, if you can express the Goldbach Conjecture in it, I'm going to lean towards "No": a halting decider for such a language would, among other things, settle Goldbach, since a program that searches for a counterexample halts exactly when the conjecture is false (see the sketch below). The way that addition and multiplication are defined may preclude expressing the Goldbach Conjecture, but at a glance it doesn't look to me like it does. (There are some modifications to the Turing machine model I have considered that look like they might make the problem tractable, including the above indices, but they also complicate the heck out of any proofs, since they prevent induction within the framework. They also make the machines necessarily Turing-incomplete.)
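For concreteness, here is the standard construction behind that hunch, sketched in ordinary Python rather than in any restricted language: a program that halts if and only if the Goldbach Conjecture is false, so a correct halting decider for any language that can express this search would settle the conjecture.

```python
# Illustrative only: halts exactly when the Goldbach Conjecture fails.
# Nothing here is specific to Willard's systems or the indexed language above.

def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_sum_of_two_primes(n: int) -> bool:
    """True iff n can be written as p + q with both p and q prime."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_for_goldbach_counterexample() -> int:
    """Runs forever if Goldbach holds; returns the first even counterexample otherwise."""
    n = 4
    while True:
        if not is_sum_of_two_primes(n):
            return n
        n += 2
```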

There is thus a trade-off between something that rhymes with "power" and something that rhymes with "safety". However, there's not necessarily any reason that the mechanisms responsible for decision-making need to be as powerful as the mechanisms responsible for reasoning. It seems preferable to have consistent preferences over the ability to express more powerful preferences—and this isn't necessarily incompatible with more powerful reasoning capabilities.
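One way to picture that split, as a purely hypothetical sketch (the interface and names are mine, not anything from Willard): let the reasoning layer use whatever powerful machinery it likes, and confine the decision layer to operations whose termination is trivial to verify.

```python
# Hypothetical sketch of the reasoning/decision split described above.
from typing import Callable, Iterable

def reason(evaluate: Callable[[str], float], options: Iterable[str]) -> dict[str, float]:
    """Reasoning layer: free to use arbitrarily powerful, hard-to-verify machinery."""
    return {option: evaluate(option) for option in options}

def decide(scores: dict[str, float]) -> str:
    """Decision layer: one bounded pass over a finite dict.

    No recursion and no unbounded loops, so its termination (and hence its
    behavior) is straightforward to check.
    """
    if not scores:
        raise ValueError("no options to decide between")
    best_option = None
    for option in scores:
        if best_option is None or scores[option] > scores[best_option]:
            best_option = option
    return best_option
```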

So, as a path forward, give consideration to Willard's approach, at least with respect to decision theory: perhaps the most useful mathematical framework in which to express decision theory is not the most powerful one, but the one that can prove its own consistency.