The problem of finding a good representation of abstract thoughts
As background, here’s a simple toy model of thinking:
The goal is to find a good representation of the formal statements (and also the background knowledge) in the diagram.
The visual angle is sorta difficult to evaluate, so the two easier criteria for judging whether a representation is good are:
1. Correspondence to language sentences
2. Being well suited for doing logical/probabilistic inference
The second criterion is often neglected. People in semantics often just take language sentences and see how to rewrite them so they look like formal logic, without taking care that the result is well suited for doing logical/probabilistic inference, let alone specifying the surrounding knowledge that's required for doing inference.
In my post “Introduction to Representing Sentences as Logical Statements”, I proposed that standard ways of formalizing events like Davidsonian event semantics are bad and that instead we just want to use temporally bounded facts. Here’s a clarification about the criterion according to which my version is perhaps better[1]:
Davidsonian semantics (among other things) allows you to conveniently make it look like you explained how to formalize adverbials (“quickly”, “loudly”, “carefully”) by e.g. formalizing the sentence “Alice quickly went home” as:
∃e(Going(e) ∧ Agent(e, Alice) ∧ Goal(e, Home) ∧ Quick(e) ∧ Past(e))
This is a bug, not a feature. It gives you the illusion that you made progress on understanding language, but actually you only make progress if you’re explaining how a system can make useful inferences (or how a sentence can update a visual scene).
A more precise version of one of the claims from my post is basically that my temporally-bounded-facts way of treating events is closer to the deep formal representation that can be used for logical/probabilistic inference.
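To make that concrete, here is one possible rendering (my own illustrative sketch; the post doesn’t commit to this exact notation) of “Alice quickly went home” as temporally bounded facts, with hypothetical predicates At and Duration and time points t1 < t2 < now:

¬At(Alice, Home, t1) ∧ At(Alice, Home, t2) ∧ Duration(t1, t2) < TypicalDuration(GoingHome) ∧ t2 < now

Here “quickly” is no longer an opaque property of an event object; it becomes a constraint on a time interval, which an inference system can use directly, e.g. to conclude that Alice was at home soon after t1.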
You can use the Davidsonian representation, but to actually explain part of the meaning you need to add a lot of background knowledge for making inferences to other statements, and once you’ve added those background rules, I claim they basically act like parsing rules into a deeper representation that uses only temporally bounded facts.
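To illustrate what such a background rule might look like, here is a minimal sketch in Python (my own illustration, not anything from the post; all names like Event, Fact, and unpack_going_event are hypothetical). It translates a Davidsonian-style event term for a “going” event into temporally bounded facts that an inference system could actually operate on:

from dataclasses import dataclass

@dataclass
class Event:
    # Davidsonian-style event term: ∃e(Going(e) ∧ Agent(e, Alice) ∧ ...)
    kind: str        # e.g. "Going"
    agent: str       # e.g. "Alice"
    goal: str        # e.g. "Home"
    quick: bool      # contribution of the adverbial "quickly"
    t_start: float   # assumed start time of the event
    t_end: float     # assumed end time of the event

@dataclass
class Fact:
    # A temporally bounded fact: predicate(args) holds from t_from to t_to.
    predicate: str
    args: tuple
    t_from: float
    t_to: float | None   # None = holds open-endedly from t_from

def unpack_going_event(e: Event, typical_duration: float) -> list[Fact]:
    """Background rule for 'Going' events: unpack the event term into
    temporally bounded facts usable for logical/probabilistic inference."""
    facts = [
        # the agent is not at the goal while the event is ongoing
        Fact("NotAt", (e.agent, e.goal), e.t_start, e.t_end),
        # the agent is at the goal once the event has finished
        Fact("At", (e.agent, e.goal), e.t_end, None),
    ]
    if e.quick:
        # "quickly" becomes a constraint on the event's duration
        facts.append(Fact("DurationBelow", (typical_duration,), e.t_start, e.t_end))
    return facts

# From "Alice quickly went home" we can now infer, e.g., that Alice was at
# home after t_end, an inference the bare event formula doesn't license
# without rules like this one.
facts = unpack_going_event(Event("Going", "Alice", "Home", True, 0.0, 5.0), 30.0)

The point of the sketch is just that the real explanatory work happens in rules like unpack_going_event, not in the event formula itself.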
Tbc, the way I represent statements in my post is still not nearly close enough to how our minds might actually track abstract information: our minds make a lot more precise distinctions and have deeper probabilistic, error-tolerant representations. Language sentences are only fuzzy shadows of our true underlying thoughts, and our minds infer a lot from context about what precisely is meant. The problem of parsing sentences into an actually good formal representation becomes correspondingly harder.
For some reasons why it’s better, see the “Events as facts” section in my post, though it’s not explained well there. Maybe it’s also sorta intuitive given the clarified context above.