I think there’s a general bias in Western culture, arising from the problems of physicalism, that leads people to consider realist ontology not worth pursuing seriously.
Can you elaborate?
In physicalism, only the things that are made up of mass are real. Autism, for example, is not made up of mass, and is thus less real from the physicalist perspective than a chair. Realist ontology does treat autism as something that can be real.
One consequence of the more physicalist perspective is that the DSM doesn’t really ask “What’s the underlying mechanism behind autism, and how do we create an ontology that’s true to that mechanism?” Instead it asks what shared symptoms clinicians can observe, and how clinicians who disagree about the underlying mechanism can still have a shared term they can use to communicate and to justify their treatments to insurance companies.
Avoiding treating autism as something real with a specific, describable underlying mechanism, and instead orienting your ontology around symptoms, puts you in a bad position to make sense of mental illness.
In the sequences on LessWrong you have a lot about epistemology but little about ontology. Most readers of LessWrong probably don’t know what realist ontology is. If you search LessWrong for Barry Smith, you get only four hits. That’s despite Barry Smith being an important philosopher in applied ontology whose work affects how AI is deployed in real-world contexts like the current Iran war.
This neglect of good realist ontology is downstream of physicalism and causes issues in many different cases. I would expect more notation development if it didn’t exist, because notation is downstream of having an ontology for which you create the notation.
Thanks for the elaboration. Do you have historical examples of new ontology unlocking new notation?
If you look at chemistry, for example, you have the up/down notation for the four classical elements, resting on the idea that fire and air strive to go up while water and earth strive to go down, plus the extra line. In the process of discovering atoms, chemists created notation to represent them. The periodic table is a development of notation for the clear ontology of chemistry. Various notations were developed in chemistry to represent molecules; for that you need a clear ontology of molecules.
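As an illustration of how chemical notation only works once the ontology of atoms is fixed, here is a small sketch (not from the original discussion; the code and the tiny element table are my own, with standard atomic weights): a molecular formula is just notation over a clear ontology in which a molecule is a multiset of well-defined kinds of atoms.

```python
import re

# A tiny illustrative ontology of atoms: each element symbol denotes one
# well-defined kind of atom with a standard atomic weight (in u, rounded).
ATOMS = {"H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999}

def molar_mass(formula: str) -> float:
    """Interpret a molecular formula written in modern chemical notation.

    The notation is readable only because the underlying ontology is clear:
    "H2O" means two atoms of kind H and one atom of kind O.
    """
    total = 0.0
    # Match an element symbol (capital letter, optional lowercase letter)
    # followed by an optional count, e.g. "H2" -> ("H", "2"), "O" -> ("O", "").
    for symbol, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMS[symbol] * (int(count) if count else 1)
    return total

print(round(molar_mass("H2O"), 3))  # water: 2*1.008 + 15.999 = 18.015
print(round(molar_mass("CO2"), 3))  # carbon dioxide: 12.011 + 2*15.999 = 44.009
```

Before atoms were part of chemistry’s ontology, no such notation could exist; once they were, the notation followed almost mechanically.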
Musical notation rests on concepts like notes being ontologically very clearly defined.
The notation of the International Phonetic Alphabet (IPA) rests on an ontology in which consonants have a place of articulation (where in the vocal tract the consonant is made: bilabial, dental, alveolar, palatal, velar, glottal) and a manner of articulation (how the airflow is shaped or obstructed: plosive, nasal, trill, fricative, approximant, lateral approximant).
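The place/manner ontology behind the IPA consonant chart can be sketched as a small data structure (the feature values below are standard IPA facts, but the code itself is a hypothetical illustration, not anything from the original discussion):

```python
from dataclasses import dataclass

# Each consonant is identified by its place and manner of articulation
# (plus voicing); the IPA symbol is just notation on top of that ontology.
@dataclass(frozen=True)
class Consonant:
    symbol: str
    place: str    # e.g. "bilabial", "alveolar", "velar"
    manner: str   # e.g. "plosive", "nasal", "fricative"
    voiced: bool

CONSONANTS = [
    Consonant("p", "bilabial", "plosive", False),
    Consonant("b", "bilabial", "plosive", True),
    Consonant("m", "bilabial", "nasal", True),
    Consonant("t", "alveolar", "plosive", False),
    Consonant("s", "alveolar", "fricative", False),
    Consonant("k", "velar", "plosive", False),
]

# Because the ontology is explicit, the familiar IPA chart layout
# (rows = manner, columns = place) is just a query over it:
plosives = [c.symbol for c in CONSONANTS if c.manner == "plosive"]
print(plosives)  # ['p', 'b', 't', 'k']
```

The chart’s rows and columns fall out of the ontology; without the place/manner distinctions there would be nothing for the notation to systematically encode.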
Interestingly, modern text-to-speech systems often internally deviate from the International Phonetic Alphabet, so that ontology is a bit more arbitrary than the ontology for different kinds of atoms. Different languages also realize the same IPA sound slightly differently. Yet if you look at the conlanging community, for example, they do use the IPA and the ontology on which it is built.
If you take a post-1960 notation like the notation for mental processes laid out in The Emprint Method by Leslie Cameron-Bandler, Michael Lebeau, and David Gordon in 1985, it does come with an ontology backing it.
Leverage Research’s charting could be seen as another post-1960 notation, one that comes out of Leverage Research’s ontology of beliefs. I haven’t interacted directly with Leverage around charting, but I would expect that Geoff would agree with that description.
Even though a scientific discourse could have been built around both notations, neither really found adoption. As far as I’m aware, post-1960 academic psychologists did not come up with anything similar to either of those two projects, and I do think their relationship to ontology is a cause of that. Physicalism seems to me like a good culprit to blame for it; Geoff and Cameron-Bandler aren’t physicalists.
Rephrasing what you are saying to check whether I understand: conceptual progress has slowed down because most research is bottlenecked on ontology. If we made progress on that, we would see more new notations. As an example you bring up mental disorders, where people are more concerned about the politics of diagnosis than about understanding the underlying reason why “autism” is a thing (or how many distinct things are behind that label). I feel like the sequences actually are pretty great for the ontology stuff? Or at least I can’t think of anything better I’ve read. Like, noticing confusion is a great skill with no skill ceiling in sight, and the sequence on words taught me a bunch about semantics that seems important for that type of stuff. I am pretty curious what is up with this Barry Smith guy now, though. If you have some specific reading recommendations from him, and maybe can even pitch what particular skill with regard to ontology you feel better at after reading him, that would be great.