The author does a good job articulating his views on why Buddhist concentration and insight practices can lead to psychological benefits. As somebody who has spent years engaged in these practices and in various types of (Western) discourse about them, I find the author’s psychological claims plausible up to a point. He does not offer a compelling mechanism for why introspective awareness of sankharas should lead to diminishing them. He also offers no account of why, if insight does dissolve psychological patterns, it would preferentially dissolve negative patterns while leaving positive patterns unchanged. In my own opinion this has a lot more to do with the set and setting of the meditation practice, i.e., the expectation that practice will have salutary effects.
I am not convinced that this is a faithful “translation” of the Buddha’s teachings. He leaves out any talk of achieving liberation from rebirth, which is the overarching goal of Buddhist practice in the original texts. He does not discuss the phenomenon of cessation/nirvana and whether it is necessary (according to the Buddha it is). He also does not address the fact that the Buddha was not aiming to teach a path of psychological health and wellbeing in our modern sense. Far from it: the idea that one could be happy and satisfied (in an ordinary psychological sense) was certainly recognized by the Buddha and his followers, but this was not seen as the goal of practice. In my view, the biggest misrepresentation of Buddhist ideology in its appropriation by the West was its construal as a secular wellness path rather than an extreme doctrine that denies any value in ordinary happiness.
Polytopos
In his book Category Theory for the Sciences, David Spivak offers an account of categories as database schemas with path equivalences that is similar to the account you’ve given here. He still presents the traditional definitions, giving examples mainly from the category of sets and functions. I also didn’t find his presentation of the database schema definition especially easy to understand, but it becomes very useful when you realize that a functor is a systematic migration of data between schemas.
This is a fascinating article about how the concept of originality differs in some Eastern cultures https://aeon.co/essays/why-in-china-and-japan-a-copy-is-just-as-good-as-an-original
I disagree with the idea that one doesn’t have intuitions about generalization if one hasn’t studied mathematics. One thing that I find so interesting about CT is that it is so general it applies as much to everyday common-sense concepts as it does to mathematical ones. David Spivak’s ontology logs are a great illustration of this.
I do agree that there isn’t a really good beginners book that covers category theory in a general way. But there are some amazing YouTube lectures. I got started on CT with this series, Category Theory for Beginners. The videos are quite long, but the lecturer does an amazing job explaining all the difficult concepts with lots of great visual diagrams. What is great about this series is that despite the “beginners” in the title he actually covers many more advanced topics such as adjunction, Yoneda’s lemma, and topos theory in a way that doesn’t presuppose prior mathematical knowledge.
In terms of books, Conceptual Mathematics really helped me with the basics of sets and functions, although it doesn’t get into the more abstract stuff very much. Finally, Category Theory for Programmers is quite accessible if you have any background in computer programming.
“according to the story that your brain is telling, there is some phenomenology to it. But there isn’t.”
Doesn’t this assume that we know what sort of thing phenomenological consciousness (qualia) is supposed to be so that we can assert that the story the brain is telling us about qualia somehow fails to measure up to this independent standard of qualia-reality?
The trouble I have with this is that there is no such independent standard for what phenomenal blueness has to be in order to count as genuinely phenomenal. The only standard we have for identifying something as an instance of the kind qualia is to point to something occurring in our experience. Given this, it remains difficult to understand how the story the brain tells about qualia could fail to be the truth, and nothing but the truth, about qualia (given the physicalist assumption that all our experience can be exhaustively explained through the brain’s activity).
I see blue and pointing to the experience of this seeing is the only way of indicating what I mean when I say “there is a blue qualia”. So to echo J_Thomas_Moros, any story the brain is telling that constitutes my experience of blueness would simply be the qualia itself (not an illusion of one).
I really enjoyed this post. It was fun to read and really drove home the point about starting with examples. I also thought it was helpful that it didn’t just say, “teach by example”. I feel that simplistic idea is all too common and often leads to bad teaching where example after example is given with no clear definitions or high level explanations. However, this article emphasized how one needs to build on the example to connect it with abstract ideas. This creates a bridge between what we already understand and what we are learning.
As I was thinking about this to write this review, I was trying to think of examples where it makes more sense to explain the abstract thing first and then give examples. I had great difficulty coming up with any examples where abstract first makes sense. The few possible examples I could think of came from pure math, and even there I wonder if it wouldn’t still help to start with examples.
The most abstract subject I’ve ever studied is category theory. Recently I was learning about adjoint functors, and here indeed the abstract definition makes sense entirely independent of any examples. However, having learned the definition, one can’t really do anything with adjoint functors until one has seen them in some examples. So this might be a case where the abstraction-example-abstraction order of explanation makes sense. On the other hand, once I learned about the free-forgetful adjunction, I thought that would have been a good example to start with to build intuition before introducing the abstract definition. I also realized that my favorite teachers of the subject still use a lot of examples, like Bartosz Milewski, who comes at category theory from the perspective of a programmer.
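To make the free-forgetful example concrete, here is a minimal sketch (my own illustration, not drawn from any of the sources mentioned) of the universal property in Python: a function from a set A into the underlying set of a monoid extends uniquely to a monoid homomorphism out of the free monoid on A, i.e., lists over A under concatenation. The names `lift` and `word_value` are hypothetical labels for this sketch.

```python
# Free-forgetful adjunction for monoids, sketched concretely.
# Free(A) = lists over A, with concatenation as the monoid operation.
# Universal property: any function f: A -> U(M) into the underlying set of a
# monoid (M, op, unit) extends uniquely to a monoid homomorphism Free(A) -> M.

def lift(f, op, unit):
    """Extend f to a monoid homomorphism from lists over A into (M, op, unit)."""
    def hom(xs):
        result = unit
        for x in xs:
            result = op(result, f(x))
        return result
    return hom

# Example: take M = (int, +, 0) and f = ord (each character to its code point).
word_value = lift(ord, lambda a, b: a + b, 0)

# The extension is a homomorphism: concatenation of lists is sent to +.
assert word_value(['a', 'b'] + ['c']) == word_value(['a', 'b']) + word_value(['c'])
```

The adjunction itself is the fact that this extension gives a bijection between functions A → U(M) and monoid homomorphisms Free(A) → M.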
Learning to program is also a good example where in advance one might think it would make sense to learn a bunch of abstractions first. However, in practice, one learns to code by example, then after having mastered some examples, learning the principles behind them.
A few points. First, I’ve heard several AI researchers say that GPT-3 is already trained on close to the limit of all high-quality human-generated text data. While the amount of text on the internet will continue to grow, it might not grow fast enough for major continued improvement. Thus additional media might be necessary for training input.
Second, deaf-blind people still have multiple senses that allow them to build 3D sensory-motor models of reality (touch, smell, taste, proprioception, vestibular sense, sound vibrations). Correlations among these senses give rise to an understanding of causality. Moreover, human brains might have evolved innate structures for things like causality, agency, objecthood, etc., which don’t have to be learned.
Third, as DALL-E illustrates, intelligence is not just about learning knowledge; it is also about expressing that learning in a medium. It is hard to see how an AI trained only on text could paint a picture or sing a song.
What happens when OpenAI simply expands this method of token prediction to train with every kind of correlated multi-media on the internet? Audio, video, text, images, semantic web ontologies, and scientific data. If they also increase the buffer size and token complexity, how close does this get us to AGI?
Various fisheries have become so depleted as to no longer be commercially viable. One of the obvious examples is the Canadian Maritime fisheries. Despite advance warning that overfishing was leading to a collapse in cod populations, they were fished to the point of commercial non-viability, resulting in a regional economic collapse that has depressed standards of living in the Maritime provinces to this day.
Excellent article, thank you. I particularly enjoyed your images and diagrams. To me concept diagrams are another superpower for explaining things. Have you written anything about that?
The term I introduced is “fittingness” not fitness. Fittingness is meant to evoke both fit, as in whether a pair of shoes fits my feet, and also fitting, as in “that is a fitting word choice for this sentence”. It is possible that there is another term which would be a better label for the underlying concept. If you have suggestions for alternatives I would love to hear them.
I think it’s important that the word is specific, not general. As you point out, we could use a general term qualified with a lengthy phrase like: “success with respect to concept formation in the context of rational inquiry,” but that clunker is difficult to sprinkle throughout an argument. The advantage of a single term to encapsulate an important idea should be obvious. Nobody suggests we should replace the term truth with the phrase, “success with respect to belief in the context of rational inquiry.” Moreover, the metaphorical associations of fit and fitting give a clue about what this kind of success actually involves. It involves concepts fitting the structures found in reality, without implying the unsustainable idea that we can know what the natural structures are in advance of inquiry. We can size a shoe without knowing our foot size in advance, just by trying on lots of different shoes until one fits.
I admit that the concept I call fittingness is not often used at present. Indeed, I believe that in present discourse fittingness is often muddled either with truth or with instrumental usefulness. This precise muddle leads to difficulties in understanding how Kuhnian paradigm shifts (or pre-paradigmatic science) can be understood as legitimate expressions of rational inquiry. I didn’t do more than hint at such problems in the post; maybe I’ll write another post about this.
The point of my post is to defuse these muddles and make it easier to appeal to fittingness on the regular. I want it to be a part of our ready-to-hand conceptual repertoire as rationalists, in the same way that we have easy access to terms like truth, probability, evidence, etc. I make a case for why this would be of benefit in the section titled, “Why is this Distinction Important?” If you don’t find that section convincing please let me know what you see as the specific shortcomings and I will try to address them.
I find it hard to believe your prediction that this breakthrough will be insignificant given what I’ve read in other reputable sources. I give a pretty high initial credence to the scientific claims of publications like Nature, which had this to say in their article on AlphaFold2:
“The ability to accurately predict protein structures from their amino-acid sequence would be a huge boon to life sciences and medicine. It would vastly accelerate efforts to understand the building blocks of cells and enable quicker and more advanced drug discovery.”
Thanks for your comment. My replies are below.
“so Gisin’s musings… are guaranteed to be not a step in any progress of the understanding of physics.”
What is your epistemic justification for asserting such a guarantee of failure? Of course, any new speculative idea in theoretical physics is far from likely to be adopted as part of the core theory, but you are making a much stronger claim by saying that it will not even be “a step in any progress of the understanding of physics”. Even ideas that are eventually rejected as false, are often useful for developing understanding. Gisin’s papers ask physicists to consider their unexamined assumptions about the nature of math itself, which seems at least like a fruitful path of inquiry, even if it won’t necessarily lead to any major breakthroughs.
“mathematical proofs are as much observations as anything else. Just because they happen in one’s head or with a pencil on paper, they are still observations.”
This reminds me of John Locke’s view that mathematical truths come from observation of internal states. That is an interesting perspective, but I’m not sure it can hold up to scrutiny. The biggest issue with it seems to be that in order to evaluate the evidence provided by empirical observations we must have a rational framework which includes logic and math. If logic and math themselves were simply observational, then we would have no framework for evaluating the evidence provided by those observations. Perhaps you can give an alternative account of how we evaluate evidence without presupposing a rational framework.
“The difficulty of calculating a far-away digit in the decimal expansion of pi has nothing to do with pi itself: you can perfectly well define it as the ratio of circumference to diameter, or as a limit of some series”
I agree with this statement. I think, though, that it misses the point I was elaborating about Brouwer’s concept of choice sequences. The issue isn’t that we can’t define a sequence that is equivalent to the infinite expansion of pi; rather, it is that for any real quantity we can never be certain that it will continue to obey the lawlike expansion into the future. So the issue isn’t the “difficulty of calculating a far-away digit”; the issue is that no matter how many digits we observe following the lawlike pattern, the future digits may still deviate from that pattern. No matter how many digits of pi a real number contains, the next digit might suddenly be something other than pi’s (in which case we would say retrospectively that the real number was never equal to pi in the first place). This is actually what we observe if we, say, measure the ratio of a jar lid’s circumference to its diameter. The first few digits will match pi, but then as we go to smaller scales it will deviate.
“...the idea that Einstein’s equations are somehow unique in terms of being timeless is utterly false”
I made no claim that they are unique in this regard.
You might be interested to look at David Corfield’s book Modal Homotopy Type Theory. In the chapter on modal logic, he shows how all the different variants of modal logic can be understood as monads/comands. This allows us to understand modality in terms of “thinking in a context”, where the context (possible worlds) can be given a rigorous meaning categorically and type theoretically (using slice categories).
“I think I can fruitfully engage in truth evaluation of grue things without agreeing or supposing that grue is fitting.”
As indicated in the post, fittingness is dependent on the domain D under study. If we take grue to be a term in the study of colour, it is profoundly ill-fitting. I think it is a fair assessment that no researcher who studies colour would find it fruitful or salient to evaluate the truth of propositions involving grue. The picture changes however if we let D be philosophy of science. In that case, grue is fitting, precisely because it illuminates an important paradox in our theories of induction. Here the truth evaluation of statements formulated using grue is fruitful, but that’s not a problem because grue is fitting.
A true counter example to my claim would require that a concept C is ill-fitting for a given domain D and yet it is fruitful (for the purpose of rational inquiry into D) to evaluate the truth of statements which are formulated with C.
Regarding the quantum mechanics example, I would need more details to fully understand your claim. My hunch is that the mathematical concepts used to formulate QM could be fitting for the domain of physics even if we don’t have a good meta-interpretation of them. If you think this isn’t the case, please elaborate on why not.
It seems odd to equate rationality with probabilistic reasoning. Philosophers have always distinguished between demonstrative (i.e., mathematical) reasoning and probabilistic (i.e., empirical) reasoning. To say that rationality is constituted only by the latter form of reasoning is very odd, especially considering that it is only through demonstrative knowledge that we can even formulate such things as Bayesian mathematics.
Category theory is a meta-theory of demonstrative knowledge. It helps us understand how concepts relate to each other in a rigorous way. This helps with the theory side of science rather than the observation side of science (although applied category theorists are working to build unified formalisms for experiments-as-events and theories).
I think it is accurate to say that, outside of computer science, applied category theory is a very young field (maybe 10-20 years old). It is not surprising that there haven’t been major breakthroughs yet. Historically fruitful applications of discoveries in pure math often take decades or even centuries to develop. The wave equation was discovered in the 1750s in a pure math context, but it wasn’t until the 1860s that Maxwell used it to develop a theory of electromagnetism. Of course, this is not in itself an argument that CT will produce applied breakthroughs. However, we can draw a kind of meta-historical generalization that mathematical theories which are central/profound to pure mathematicians often turn out to be useful in describing the world (Ian Stewart sketches this argument in his Concepts of Modern Mathematics pp 6-7).
CT is one of the key ideas in 20th century algebra/topology/logic which has allowed huge innovation in modern mathematics. What I find interesting in particular about CT is how it allows problems to be translated between universes of discourse. I think a lot of its promise in science may be in a similar vein. Imagine if scientists across different scientific disciplines had a way to use the theoretical insights of other disciplines to attack their problems. We already see this when say economists borrow equations from physics, but CT could enable a more systematic sharing of theoretical apparatus across scientific domains.
For an in-depth argument that could be taken to support this point, I highly recommend Humankind: A Hopeful History by Rutger Bregman.
“it generalises. Logic and probability and interpretation and theorisation and all that, are also outputs of the squishy stuff in your head. So it seems that epistemology is not first philosophy, because it is downstream of neuroscience.”
I find this claim interesting. I’m not entirely sure what you intend by the word “downstream” but I will interpret it as saying that logic and probability are epistemically justified by neuroscience. In particular, I understand this to include the claim that a priori intuition unverified by neuroscience is not sufficient to justify mathematical and logical knowledge. If by “downstream” you have some other meaning in mind, please clarify. However, I will point out that you can’t simply mean causally downstream, i.e., the claim that intuition is caused by brain stuff, because a merely causal link does not relate neuroscience to epistemology (I am happy to expand on this point if necessary, but I’ll leave it for now).
So given my reading of what you wrote, the obvious question to ask is: do we have to know neuroscience to do mathematics rationally? This would be news to Bayes, who lived in the 18th century when there wasn’t much neuroscience to speak of. Your view implies that Bayes (or Euclid for that matter) was epistemically unjustified in his mathematical reasoning because he didn’t understand the neural algorithms underlying his mathematical inferences.
If this is what you are claiming, I think it’s problematic on a number of levels. First, it faces a steep initial plausibility problem in that it implies mathematics as a field was unjustified for most of its thousands of years of history, until some research in empirical science validated it. That is of course possible, but I think most rationalists would balk at seriously claiming that Euclid didn’t know anything about geometry because of his ignorance of cognitive algorithms.
But a second, deeper problem affects the claim even if one leaves off historical considerations and only looks at the present state of knowledge. Even today, when we do know a fair amount about the brain and cognitive mechanisms, the idea that math and logic are epistemically grounded in this knowledge is viciously circular. Any sophisticated empirical science relies on the validity of mathematical inference to establish its theories. You can’t use neuroscience to validate statistics when the validity of neuroscientific empirical methods themselves depends on the epistemic bona fides of statistics. With logic the case is even more obvious. An empirical science will rely on the validity of deductive inference in formulating its arguments (read any paper in any scientific journal). So there is no chance that the rules of logic will be ultimately justified through empirical research. Note this isn’t the same as saying we can’t know anything without assuming the prior validity of math and logic. We might have lots of basic kinds of knowledge about tables and chairs and such, but we can’t have sophisticated knowledge of the sort gained through rigorous scientific research, as this relies essentially on complex reasoning for its own justification.
An important caveat to this is that of course we can have fruitful empirical research into our cognitive biases. For example, the famous Wason selection task showed that humans in general are not very reliable at applying the logical rule of modus tollens in an abstract context. However, crucially, in order to reach this finding, Wason (and other researchers) had to assume that they themselves knew the right answer on the task. That is, the cognitive science researchers assumed the a priori validity of the deductive inference rule based on their knowledge of formal logic. The same is true for Kahneman and Tversky’s studies of bias in the areas of statistics and probability.
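The logic the researchers took as given is easy to state in code. Below is a small illustrative sketch (my own, with hypothetical names) of the standard version of the task: cards show A, K, 4, 7, and the rule is “if a card shows a vowel, its other side shows an even number.” Only a card that could conceal a (vowel, odd) pair can falsify the rule, which is why the correct picks are A (modus ponens) and 7 (modus tollens).

```python
# Wason selection task: which cards must be turned over to test the rule
# "if a card shows a vowel, then its other side shows an even number"?
# A card matters only if it could reveal a counterexample: (vowel, odd).

def must_turn(card: str) -> bool:
    vowels = set('AEIOU')
    if card.isalpha():
        return card in vowels   # a vowel could hide an odd number
    return int(card) % 2 == 1   # an odd number could hide a vowel

picks = [card for card in ['A', 'K', '4', '7'] if must_turn(card)]
# picks == ['A', '7']; subjects typically choose 'A' and '4', missing '7'.
```

The interesting epistemic point is that scoring subjects’ answers as errors presupposes that this normative analysis is correct.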
In summary, I am wholeheartedly in favour of using empirical research to inform our epistemology (in the way that the cognitive biases literature does). But there is a big difference between this and the claim that epistemology doesn’t need anything in addition to empirical science. This is simply not true. Mathematics is the clearest example of why this argument fails, but once one has accepted its failure in the case of mathematics, one can start to see how it might fail in other less obvious ways.
An interesting contribution to this topic is this book by Hofstadter and Sander
They explain thinking in terms of analogy, which as they use the term encompasses metaphor. This book is a mature, cognitive-science articulation of many of the fun and loose ideas that Hofstadter first explored in G.E.B.
Agreed. OpenAI did a study on trends in algorithmic efficiency. They found a 44x improvement in training efficiency on ImageNet over 7 years.
https://openai.com/blog/ai-and-efficiency/
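For a sense of scale, a 44x gain over 7 years corresponds to efficiency doubling roughly every 15-16 months. A quick back-of-the-envelope computation (my own arithmetic, not a figure quoted from the linked post):

```python
import math

# If efficiency improves 44x over 7 years at a constant exponential rate,
# the doubling time in months is (7 * 12) / log2(44).
doubling_months = 7 * 12 / math.log2(44)
print(round(doubling_months, 1))  # → 15.4
```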