I think this is more the start of a pleasant long discussion over beers (or tea/coffee) than of a thread: it would take a lot of time to write, and very few people would be interested in it.
Feel free to DM me on Twitter (I am more responsive there) if you are ever in Europe; I think it would be fun!
But very quickly, so as not to leave you hanging:
> If one wants to represent an infinite partially ordered set in this fashion, one has to use an informal “Hasse-like sketch” which is not well-defined (as far as I know, although I would be super-curious if one could extend the formal construct of Hasse diagrams to those cases).
I agree, and I think that they work as pointers for people who already know the corresponding formalisations and their limitations.
If used in other situations, I expect they largely do not work. Concretely, I expect that readers/students/attendees will reliably come out with wrong intuitions, and that proofs will tend to include mistakes.
I have seen “intuitive” diagrams hiding mistakes so many times, both in ML and in type theory: proof derivations with holes, neural network architectures with holes, calculations with holes, infinite graphs with holes, etc. The infamous “x1 + x2 + … + xn” that is not always well defined (it silently presupposes a meaning for the ellipsis and, depending on the setting, a non-empty index range, associativity, or a convergence notion).
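To make the Hasse point concrete: for a finite poset the diagram is genuinely well-defined, because it is nothing more than the covering relation, and the covering relation is exactly what evaporates in the infinite dense case. A minimal sketch (my illustration, not something from this thread; `hasse_edges` and the divisibility example are made up):

```python
from itertools import product

def hasse_edges(elements, leq):
    """Edges of the Hasse diagram of a *finite* poset: the covering
    relation. x is covered by y iff x < y and nothing sits strictly
    between them. In a dense infinite order (e.g. the rationals) no
    element covers another, so this relation is empty and the "diagram"
    carries no information -- hence the informal Hasse-like sketches."""
    def lt(x, y):
        return leq(x, y) and x != y
    return [(x, y) for x, y in product(elements, repeat=2)
            if lt(x, y) and not any(lt(x, z) and lt(z, y) for z in elements)]

# Divisibility order on {1, 2, 3, 4, 6, 12}:
print(hasse_edges([1, 2, 3, 4, 6, 12], lambda a, b: b % a == 0))
# -> [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
```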
> The informal part always leads; formalization can’t be fruitful without it (although perhaps different kinds of minds, not like the minds of human mathematicians, could fruitfully operate in an entirely formal universe, who knows).
Yup, I was going to say something similar.
I believe this is more an artefact of how we humans operate than of the truth of things.
For instance, I believe that it is possible to write an automated theorem prover that proves most of the theorems established before the 15th century, while it would have been impossible for people before the 15th century to prove most of the things that this automated theorem prover could prove.
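To give a flavour of this (a sketch assuming Lean 4 with mathlib; `euclid_primes` is a name I made up, while `Nat.exists_infinite_primes` is an actual mathlib lemma): Euclid’s infinitude of primes, proved around 300 BC, machine-checks in one line against a modern formal library.

```lean
import Mathlib

-- Euclid (c. 300 BC): there are infinitely many primes,
-- stated as: above every n there is a prime p.
-- The kernel accepts this only because every step, down to
-- the axioms, is formally justified inside mathlib.
theorem euclid_primes (n : ℕ) : ∃ p, n ≤ p ∧ p.Prime :=
  Nat.exists_infinite_primes n
```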
In practice, we humans are quite biased towards positing the existence of entities behind any perceived regularity. (Spirits, phlogiston, ethers, new-age energies, etc.)
Usually, these entities do not exist, and there is no coherent deep generalisation behind the regularity.
Sometimes, the regularities are not even there, and we just hallucinated them.
I think maths is roughly similar: there are things, but those things are not Plato’s squares and standard numbers, just equivalence classes of symbolic manipulations. We may perceive regularities through our intuition, and sometimes have intuitions precise enough to be useful, but it’s all downstream of that source of truth.
Thanks!
I will certainly grant you the main point: we see tons of errors, even in texts written by leading mathematicians, and there is still plenty of uncertainty about the validity of some core results.
But that’s precisely why it is premature to say that we have entered the formalist era. When people start routinely verifying their proofs in automated theorem provers, that’s when we’ll be able to say that we are in the formalist era.