So, as a student (not as a researcher), I actually got acquainted with DCPOs and Scott domains, specifically in the context of denotational semantics.
One of my teachers was actually quite interested in general (non-Hausdorff) manifolds.
That’s awesome! I’d love to understand more about manifolds in this context. We managed at some point to progress to reconciling Scott domains with linear algebra (with some implications for machine learning), but progressing to manifolds is something else.
I think this is precisely one of the realms where formalism shines.
I expect it would have been impossible for the field to take off without formalism.
There are so many broken intuitions in non-metric geometry! If we treated intuitions as the ground truth, the field could not have succeeded. (Even in introductory metric geometry, it is easy to get screwed when one is not clearly introduced to specific examples where closed, bounded, and compact come apart; e.g., an infinite set with the discrete metric is closed and bounded, but not compact.)
There are just too many different types of spaces/topologies/manifolds, with different axioms behind them. The “true objects” here have little to do with standard numbers, intuitive Euclidean spaces, etc.
Yes, absolutely. But only when one is engaging in the interplay between the formal and the informal.
The informal and the formal have to go together, at least when humans practice math.
To the extent one has “true objects” in mind, these objects are of a purely formal nature.
The question is: what is formal? Are categories and toposes well defined? A follower of Brouwer and Heyting would tell you “no”.
And can one work with the objects before their formalization is fully elucidated? One often has to; otherwise one would be unable to obtain that formalization at all.
When one is talking about cutting-edge math, math which is in the process of being created, the full formalization is often unavailable, yet progress needs to be made.
back to the beginning:
Regardless of how they feel about non-intuitive proofs, when these proofs contradict their intuitions, the non-intuitive proofs will still be the source of truth.
Yes, absolutely, but it will be a very incomplete truth.
The intuition about what is connected to what is an important part of the actual mathematical material even if that intuition is not fully reified into definitions, axioms, and theorems.
So yes, this is a valid source of mathematical truth, but all kinds of informal considerations also form important mathematical material and empower people’s ability to obtain the formalized part of the truth as well. E.g., if one looks at the Langlands program, one clearly sees how much the informal part dominates and leads, and how the technical part gradually follows, with conjectures evolving and changing before they become theorems.
To take a simple example from my own studies: if one wants to obtain a generalized metrization of the Scott topology, the task is obviously very informal. Who knows a priori what this generalized metrization might be… All one understands at the beginning is that ordinary metrics will not do, because the topology is non-Hausdorff. Then one needs to search, and one first finds that the literature contains the “George Washington Bridge distance” quasi-metrics, where an asymmetric distance is non-zero in one direction and zero in the opposite direction, just like the toll on that bridge. Then one proves that a topologically correct generalized metrization can be obtained this way, but when one also tries to figure out how to compute this generalized distance, one encounters obstacles. Then one realizes that the axiom d(x,x)=0 is incompatible with the distance being monotonic with respect to both variables, and therefore incompatible with the distance being Scott continuous with respect to both variables, and Scott continuity is the abstract version of computability in this field. And then one proceeds to (re)discover generalized metrics without the d(x,x)=0 axiom, obtaining (symmetric) relaxed metrics and partial metrics, which can be made Scott continuous with respect to both variables and can be made computable, and so on.
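To make the endpoint of that story concrete, here is a minimal sketch in Python (the function names quasi_dist and partial_dist are mine, and the formulas are the standard examples from the partial-metrics literature, not necessarily the exact constructions from the text I link below). It contrasts a “George Washington Bridge” style quasi-metric on the reals with a partial metric on interval numbers, whose self-distance p(x,x) is the length of the interval and hence vanishes only at the maximal points [r, r]:

```python
def quasi_dist(x, y):
    """Asymmetric "George Washington Bridge" distance on the reals:
    zero going one way, positive going the other."""
    return max(x - y, 0.0)

def partial_dist(x, y):
    """Partial metric on intervals x = [a, b], y = [c, d]:
    p(x, y) = max(b, d) - min(a, c).
    The self-distance p(x, x) = b - a is the length of the interval,
    so it is zero only for exact points [r, r]."""
    (a, b), (c, d) = x, y
    return max(b, d) - min(a, c)

print(quasi_dist(1.0, 3.0), quasi_dist(3.0, 1.0))  # 0.0 2.0
print(partial_dist((0.0, 1.0), (0.25, 0.75)))      # 1.0
print(partial_dist((0.0, 1.0), (0.0, 1.0)))        # 1.0: the d(x,x)=0 axiom fails
print(partial_dist((0.5, 0.5), (0.5, 0.5)))        # 0.0 at a maximal point
```

Note that shrinking either interval can only decrease the value of partial_dist; that is exactly the monotonicity in both variables which the d(x,x)=0 axiom rules out.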
The informal part always leads, formalization can’t be fruitful without it (although perhaps different kinds of minds which are not like minds of human mathematicians could fruitfully operate in an entirely formal universe, who knows).
You mention Hasse diagrams. They are a modern purely formal construct.
Only if the set is finite.
If one wants to represent an infinite partially ordered set in this fashion, one has to use an informal “Hasse-like sketch” which is not well-defined (as far as I know, although I would be super-curious if one could extend the formal construct of Hasse diagrams to those cases).
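For contrast, in the finite case the construct is completely precise: the Hasse diagram is exactly the covering relation, i.e., the transitive reduction of the strict order. A minimal sketch (the helper covers and the divisibility example are mine, not from any particular library):

```python
def covers(elements, leq):
    """Covering pairs (x, y): x < y with no z strictly in between.
    For a finite poset this is exactly its Hasse diagram."""
    def lt(x, y):
        return leq(x, y) and x != y
    return [(x, y)
            for x in elements for y in elements
            if lt(x, y) and not any(lt(x, z) and lt(z, y) for z in elements)]

# Divisibility order on {1, ..., 12}:
divides = lambda x, y: y % x == 0
print(covers(range(1, 13), divides))
# (2, 4) is an edge, but (2, 8) is not, since 2 < 4 < 8 in this order
```

And one can see why nothing like this survives for, say, the interval numbers in the examples below: in a dense order no element covers another, so the covering relation is empty and the formal diagram carries no information at all.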
I am going to link a few examples of these informal Hasse diagrams, using a text originally from 2002 (the diagrams are all from the 1990s): https://arxiv.org/abs/1512.03868
On page 15 (page 30 of the PDF file), we see an informal “Hasse diagram” for interval numbers within a segment. On page 16 (page 31 of the PDF file), this “Hasse diagram” is used in an even more informal fashion (with dashed lines to indicate that these lines are not a part of the depicted upper set Va,b).
On page 103 (page 118 of the PDF file), we see a more discrete infinite partial order, yet even this more discrete case is, I believe, beyond the power of formalized “Hasse diagrams” (it’s not an accident that most elements have to be omitted and replaced by ellipsis-like dots looking like small letters “o”).
I think this is more the start of a pleasant long discussion over beers (or tea/coffee) than of a thread that would take a lot of time to write, and that very few people would be interested in.
Feel free to DM me on Twitter (I am more responsive there) if you are ever in Europe, I think it would be fun!
But very quickly, so as not to leave you hanging:
If one wants to represent an infinite partially ordered set in this fashion, one has to use an informal “Hasse-like sketch” which is not well-defined (as far as I know, although I would be super-curious if one could extend the formal construct of Hasse diagrams to those cases).
I agree, and I think that they work as pointers for people who already know the corresponding formalisations and their limitations.
If used in other situations, I expect they largely do not work. Concretely, I expect that readers/students/attendees will reliably come out with wrong intuitions, and that proofs will tend to include mistakes.
I have seen “intuitive” diagrams hiding mistakes so many times, both in ML and in type theory: proof derivations with holes, neural network architectures with holes, calculations with holes, infinite graphs with holes, etc. The infamous “x1 + x2 + … + xn” that is not always well defined.
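The cheapest instance of that last one, as a sketch (the function name informal_sum is mine): transcribing the picture literally leaves the n = 0 case undefined.

```python
from functools import reduce

def informal_sum(xs):
    # A literal transcription of the picture "x1 + x2 + ... + xn":
    # start from x1 and keep adding. The picture is silent about n = 0.
    return reduce(lambda acc, x: acc + x, xs)

print(informal_sum([1, 2, 3]))   # 6, as the picture suggests
try:
    print(informal_sum([]))
except TypeError as e:
    print("n = 0 case:", e)      # reduce() of empty iterable with no initial value
```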
The informal part always leads, formalization can’t be fruitful without it (although perhaps different kinds of minds which are not like minds of human mathematicians could fruitfully operate in an entirely formal universe, who knows).
Yup, I was going to say something similar.
I believe this is more an artefact of how we humans operate than of the truth of things.
For instance, I believe that it is possible to write an automated theorem prover that proves most of the theorems established before the 15th century, and that it would have been impossible for people before the 15th century to prove most of the things that this automated theorem prover could prove.
In practice, we humans are quite biased towards positing the existence of entities behind any perceived regularity. (Spirits, phlogiston, ethers, new-age energies, etc.)
Usually, these entities do not exist, and there is no coherent deep generalisation behind the regularity.
Sometimes, the regularities are not even there, and we just hallucinated them.
I think maths is roughly similar: there are things, but those things are not Plato’s squares and standard numbers, just equivalence classes of symbolic manipulations. We may perceive regularities through our intuition, and sometimes have intuitions precise enough to be useful, but it’s all downstream of that source of truth.
I will certainly grant you the main point: we see tons of errors, even in texts written by leading mathematicians. There is still plenty of uncertainty about the validity of some core results.
But that’s precisely why it is premature to say that we have entered the formalist era. When people start routinely verifying their results in automated theorem provers, that’s when we’ll be able to say that we are in the formalist era.