Human minds form various abstractions over our environment. These abstractions are sometimes fuzzy (too large to fit into working memory) or leaky (they can fail, letting the details underneath show through).
Mathematics is the study of what happens when your abstractions are completely non-fuzzy (always fit in working memory) and completely non-leaky (never fail). And also the study of which abstractions can do that.
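(A concrete sketch of the leaky vs. non-leaky contrast, in Lean, purely as an illustration: floating-point numbers are a leaky abstraction of the reals, while Nat arithmetic never leaks.)

```lean
-- Floats abstract the reals, but the abstraction leaks:
#eval (0.1 + 0.2 : Float) == 0.3   -- false: rounding details show through
-- Nat abstracts counting, and never leaks:
#eval (1 + 2 : Nat) == 3           -- true, always
```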
Working memory bounds aren’t super related to non-fuzziness, since you can have a window which slides over the context and is still rigorous at every step. Absolute local validity, due to the well-specifiedness of the axioms and rules of inference, is closer to the core.
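A toy Lean sketch of what I mean (nothing canonical, just an illustration): each step of a proof is checked locally against the rules of inference, so the checker only ever needs a small window in view.

```lean
-- Each `have` is verified on its own, against only the hypotheses
-- it cites; the proof as a whole never needs to be "in view" at once.
theorem step_by_step (a b c : Nat) (h1 : a = b) (h2 : b = c) : a = c := by
  have s1 : a = b := h1   -- checked locally against h1
  have s2 : b = c := h2   -- checked locally against h2
  exact s1.trans s2       -- this step sees only s1 and s2
```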
(Realised you mean that the axioms and rules of inference are what’s in working memory, not the whole tower; retracted.)
> Mathematics is the study of what happens when your abstractions are completely non-fuzzy (always fit in working memory)
I don’t think that’s true; every mathematical insight started out as an intuitive guess that may or may not turn out to be wrong, pending the painstaking work of completing the proof. The whole proof does not fit in working memory. It can be learned and recited, but that doesn’t mean it’s all in working memory at the same time.
(This is a brainstorm-type post which I’m not highly confident in, putting it out there so I can iterate. Thanks for replying and helping me think about it!)
I don’t mean that the entire proof fits into working memory, but that the abstractions involved in the proof do. Philosophers might work with a concept like “the good,” which has a few properties immediately apparent but others available only on further deep thought. Mathematicians work with concepts like “group” or “4,” whose properties are immediately apparent, and these are what’s involved in proofs. Call the former fuzzy concepts and the latter non-fuzzy concepts.
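To make the contrast concrete, here’s a minimal Lean sketch (a hypothetical MyGroup, not Mathlib’s actual Group class): every defining property of the concept is stated up front and fits in one glance.

```lean
-- A non-fuzzy concept: all of "group" is right here, a few axioms,
-- with no further properties hiding outside the definition.
structure MyGroup (G : Type) where
  mul : G → G → G
  one : G
  inv : G → G
  mul_assoc : ∀ a b c : G, mul (mul a b) c = mul a (mul b c)
  one_mul   : ∀ a : G, mul one a = a
  inv_mul   : ∀ a : G, mul (inv a) a = one
```

Everything else true of groups is logically entailed by those few lines, which is exactly the non-fuzzy property.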
(Philosophers often reflect on their concepts, like “the good,” and uncover important new properties, because philosophy is interested in the intuitions people have from their daily experience. But math requires clear up-front definitions; if you reflect on your concept and uncover an important new property not logically entailed by the others, you’re supposed to use a new definition.)
Hmm, you’re right, that’s a distinction.

I guess I glossed over it because in applied conceptual-engineering fields like code (and maybe physics, though that may be more about the fuzziness of the mapping to the physical world, and maybe even applied math sometimes), where plenty of math is done, there are still lots of situations where the abstraction stops fitting in working memory: it has grown too complex for most of the people who work with it to fully understand its definitions.
Also, maybe I’m assuming math is gonna get like that too once AI mathematicians start to work? (And I’ve always felt like there should be a lot more automation in math than there is.)