Informality
In Market Logic (part 1, part 2) I investigated what logic and theory of uncertainty naturally emerges from a Garrabrant-induction-like setup if it isn’t rigged towards classical logic and classical probability theory. However, I only dealt with opaque “market goods” which are not composed of parts. Of course, the derivatives I constructed have structure, but derivatives are the analogue of logically definable things: they only take the meaning of the underlying market goods and “remix” that meaning. As Sam mentioned in Condensation, postulating a latent variable may involve expanding one’s sense of what is; expanding the set of possible worlds, not only defining a new random variable on the same outcome space.
Simply put, I want a theory of how vague, ill-defined, messy concepts relate to clean, logical, well-defined, crisp concepts. Logic is already well-defined, so it doesn’t suit the purpose.[1]
So, let’s suppose that market goods are identified with sequences of symbols, which I’ll call strings. We know the alphabet, but we don’t a priori have words and grammar. We only know these market goods by their names; we don’t a priori know what they refer to.
This is going to be incredibly sketchy, by the way. It’s a speculative idea I want to spend more time working out properly.
So each sequence of symbols is a market good. We want to figure out how to parse the strings into something meaningful. Recall my earlier trick of identifying market trades with inference. How can we analyze patterns in the market trades, to help us understand strings as structured claims?
Well, reasoning on structured claims often involves substitution rules. We’re looking at trades moving money from one string to another as edits. Patterns in these edits across many sentence-pairs indicate substitution rules which the market strongly endorses. We can look for high-wealth traders who enforce given substitution rules, or we can look for influential traders who do the same (IE might be low-wealth but enforce their will on the market effectively, don’t get traded against). We can look at substitution rules which the market endorses in the limit (constraint gets violated less over time). Perhaps there are other ways to look at this as well.
In any case, somehow we’re examining the substitution rules endorsed by the market.
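To make this concrete, here is a minimal sketch of rule-mining, assuming trades arrive as (sold, bought) string pairs and that we only look for single contiguous edits; the frequency threshold stands in for the wealth/influence/limit criteria above:

```python
from collections import Counter
from difflib import SequenceMatcher

def edit_pair(a: str, b: str):
    """Return the (old, new) substring pair if b differs from a by a
    single contiguous edit, else None."""
    ops = [op for op in SequenceMatcher(None, a, b).get_opcodes()
           if op[0] != "equal"]
    if len(ops) == 1:
        _tag, i1, i2, j1, j2 = ops[0]
        return (a[i1:i2], b[j1:j2])
    return None

def candidate_rules(trades, min_count=3):
    """trades: iterable of (sold, bought) string pairs, read as edits.
    Edit patterns recurring across many pairs are candidate substitution
    rules which the market endorses."""
    counts = Counter()
    for sold, bought in trades:
        pair = edit_pair(sold, bought)
        if pair is not None:
            counts[pair] += 1
    return [rule for rule, n in counts.items() if n >= min_count]
```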
First, there are equational substitutions, which are bidirectional: synonym relationships.
Then there are one-directional substitutions. There’s an important nuance here: in logic, there are negative contexts and positive contexts. A positive context is a place in a larger expression where strengthening the term strengthens the whole expression. “Stronger” in logic means more specific, claims more, rules out more worlds. So, for example, “If I left the yard, I could find my way back to the house” is a stronger claim than “If I left the yard, I could find my way back to the yard”, since one could in theory find one’s way back to the yard without being able to find the house, but not vice versa. In “If A then B” statements, B is a positive context and A is a negative context. “If I left the yard, I could find my way back to the house” is a weaker claim than “If I left the house, I could find my way back to the house”, because it has the stronger premise.
Negation switches us between positive and negative contexts. “This is not an apple” is a weaker claim than “This is not a fruit”. This example also illustrates that substitution can make sense on noun phrases, not just sub-sentences; noun phrases can be weaker or stronger even though they aren’t claims. Bidirectional substitution subsumes different types of equality, at least $=$ (noun equivalence) and $\leftrightarrow$ (claim equivalence). One-directional substitution subsumes different types as well, at least $\subseteq$ (set inclusion) and $\rightarrow$ (logical implication). So, similarly, our concept of negation here combines set-complement with claim negation.
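The polarity bookkeeping is mechanical, so here is a small sketch of it, assuming formulas are nested tuples with string atoms (the connective names are placeholders):

```python
def polarities(formula, sign=+1, out=None):
    """Formulas are nested tuples: ("not", p), ("implies", a, b),
    ("and", a, b), ("or", a, b); atoms are strings. Returns (atom, sign)
    pairs, where +1 marks a positive context and -1 a negative one:
    negation flips the sign, as does the premise of an implication."""
    if out is None:
        out = []
    if isinstance(formula, str):
        out.append((formula, sign))
    elif formula[0] == "not":
        polarities(formula[1], -sign, out)
    elif formula[0] == "implies":
        polarities(formula[1], -sign, out)  # premise: negative context
        polarities(formula[2], sign, out)   # conclusion: positive context
    else:  # "and" / "or" leave polarity unchanged
        for sub in formula[1:]:
            polarities(sub, sign, out)
    return out

# "If I left the yard, I could find my way back to the house":
print(polarities(("implies", "left_yard", "back_to_house")))
# [('left_yard', -1), ('back_to_house', 1)]
```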
Sometimes, substitution rules are highly context-free. For example, $2+2=4$, so anywhere $2+2$ occurs in a mathematical equation or formula, we can substitute $4$ while preserving the truth/meaning of the claim/expression.
Other times, substitutions are highly context-dependent. For example, a dollhouse chair is a type of chair, but it isn’t good for sitting in.
A transparent context is one such as mathematical equations/formulas, where substitution rules apply. Such a context is also sometimes called referentially transparent. An opaque context is one where things are context-sensitive, such as natural language; you can’t just apply substitution rules. This concept of transparent context is shared between philosophy of language, philosophy of mind, linguistics, logic, and the study of programming languages. One advantage claimed for functional programming languages is their referential transparency: an expression evaluates exactly the same way, no matter what context it is evaluated in. Languages with side-effects don’t have this property.
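The programming-language version of the distinction fits in a few lines of Python (Python rather than a functional language, just to keep one language across these sketches):

```python
def area(r):
    """Pure: the result depends only on the argument."""
    return 3.14159 * r * r

# Referentially transparent: any occurrence of area(2) can be replaced
# by its value without changing what the program computes.
assert area(2) + area(2) == 2 * area(2)

_counter = 0

def next_id():
    """Impure: reads and mutates hidden state."""
    global _counter
    _counter += 1
    return _counter

# Opaque: next_id() is not substitutable by its value; the same
# expression evaluates differently in different contexts.
assert next_id() != next_id()
```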
So, in our market on strings, we can examine where substitution rules apply to find transparent contexts. I think a transparent context would be characterized as something like:
- A method for detecting when we’re in that context. This might itself be very context-sensitive, EG, it requires informal skill to detect when a string of symbols is representing formal math in a transparent way.[2]
- A set of substitution rules which are valid for reasoning in that context. This may involve a grammar for parsing expressions in the context, so that we know how to parse into terms that can be substituted.
The same could characterize an opaque context, but the substitution rules for the transparent context would depend only on classifying sub-contexts into “positive” or “negative” contexts.
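As a rough sketch of that characterization (the field names and types are placeholders, not a worked-out proposal):

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Rule:
    old: str
    new: str
    bidirectional: bool  # synonymy if True; one-directional if False

@dataclass
class Context:
    # A (possibly very context-sensitive) detector for the context.
    detect: Callable[[str], bool]
    # Substitution rules valid for reasoning inside it; applying them
    # may require a grammar for parsing strings into substitutable terms.
    rules: List[Rule]
    # In a transparent context, rule validity depends only on whether a
    # position is positive (+1) or negative (-1); an opaque context may
    # gate rules on arbitrary features of the surroundings.
    polarity_at: Optional[Callable[[str, int], int]] = None
```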
There’s nothing inherently wrong with an opaque context; I’m not about to call for us all to abandon natural languages and learn Lojban. Even logic includes non-transparent contexts, such as modal operators. Even functional programming languages have quoted strings (which are an opaque context).
What I do want to claim, perhaps, is that you don’t really understand something unless you can translate it into a transparent-context description.
This is similar to claims such as “you don’t understand something unless you can program it” or “you don’t understand something unless you can write it down mathematically”, but significantly generalized.
Going back to the market on strings, I’m saying we could define some formal metric for how opaque/transparent a string or substring is, but more opaque contexts aren’t inherently meaningless. If the market is confident that a string is equivalent (inter-tradeable) with some highly transparent string, then we might say “It isn’t transparent, but it is interpretable”.
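One crude candidate for such a metric, as a sketch: count a rule application as meaning-preserving when the market prices the edited string the same as the original, and score a string by the fraction of applications it survives. The pricing interface and tolerance here are stand-ins:

```python
def transparency(string, rules, price, tol=1e-9):
    """Fraction of endorsed-rule applications the market treats as
    meaning-preserving (the edited string trades at the same price).
    1.0 behaves like a fully transparent context; 0.0 is fully opaque.
    `rules` is a list of (old, new) pairs; `price` maps a string to the
    market's current price for it."""
    attempted = preserved = 0
    for old, new in rules:
        start = string.find(old)
        while start != -1:
            attempted += 1
            edited = string[:start] + new + string[start + len(old):]
            if abs(price(edited) - price(string)) <= tol:
                preserved += 1
            start = string.find(old, start + 1)
    return preserved / attempted if attempted else 1.0
```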
Let’s consider ways this can fail.
There’s the lesser sin, ambiguity. This manifests as multiple partial translations into transparent contexts. (This is itself an ambiguous description; the formal details need to be hashed out.) The more ambiguous, the worse.
(Note that I’m distinguishing this from vagueness, which can be perfectly transparent. Ambiguity creates a situation where we are not sure which substitution rules to apply to a term, because it has several possible meanings. On the other hand, the theory allows concepts to be fundamentally vague, with no ambiguity. I’m not married to this distinction but it does seem to fall out of the math as I’m imagining it.)
There could be a greater sin, where there are no candidate translations into transparent contexts. This seems to me like a deeper sort of meaninglessness.
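Putting the two failure modes together, the intended classification is something like this sketch, where `translations` stands in for whatever formal notion of market-endorsed transparent translations falls out of the setup:

```python
def classify(translations):
    """`translations`: transparent-context strings the market treats as
    inter-tradeable with the string in question. No candidates is the
    greater sin; several is the lesser sin of ambiguity."""
    if not translations:
        return "meaningless"
    if len(translations) == 1:
        return "interpretable"
    return "ambiguous"  # the more candidates, the worse
```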
There could also be other ways that interpretations into a transparent context are better or worse. They could reveal more or less of the structure of the claim.
I could be wrong about this whole thesis. Maybe there can be understanding without any interpretation into a transparent context. For example, if you can “explain like I’m five” then this is often taken to indicate a strong understanding of an idea, even though five-year-olds are not a transparent context. Perhaps any kind of translation of an idea is some evidence for understanding, and the more translating you can do, the better you understand.
Still, it seems to me that there is something special in being able to translate to a transparent context. If somehow I knew that a concept could not be represented in a transparent way, I would take that as significant evidence that it is nonsense, at least. It is tempting to say it is definitive evidence, even.
This seems to have some connections to my idea of objectivity emerging as third-person-perspectives get constructed, creating a shared map which we can translate all our first-person-perspectives into in order to efficiently share information.
A more extreme version of the hypothesis which one might consider: understanding as mapping all contexts into one transparent context, like a unified coherent world-model.
- ^
You might object that logic can work fine as a meta-theory; that the syntactic operations of the informal ought to be definable precisely in principle, EG by simulating the brain. I agree with this sentiment, but I am here trying to capture the semantics of informality. The problem of semantics, in my view, is the problem of relating syntactic manipulations (the physical processes in the brain, the computations of an artificial neural network) with semantic ones (beliefs, goals, etc). Hence, I can’t assume a nice interpretable syntax like logic from the beginning.
- ^
This is actually rare: if I say “…the idea is similar to how $x^2 - y^2 = (x+y)(x-y)$…” then I’m probably making some syntactic point, which doesn’t get preserved under substitution by the usual mathematical equivalences. Perhaps the point can be understood in a weaker transparent context, where algebraic manipulations are not valid substitutions, but there are still some valid substitutions?
I want to use some notion of the power of a logic/synonymy relation here. We can always have a vacuous synonymy relation; then all valid substitutions preserve truth. Something like propositional logic would be more powerful, and quantificational logic more powerful yet.
Consider how much we can do in Peano arithmetic because of the power of the quantifier. We can use induction to deduce many things about divisibility, the solutions of Diophantine equations, and onward to large parts of mathematics.
For a less precise example, consider reasoning only in coordinates, or with the ability to perform a change of coordinates. A change of coordinates gives a synonymy relation, which we may find simplifies an argument significantly, or lets us “really understand” what’s happening. The picture that I have is that of a weaker and a stronger logical system—in the weaker system, we can’t express the notion of a change of coordinates, but in the stronger system we can. In fact, it’s doubtful whether there’s a reasonable language in which we can express many other thoughts, but not changes of coordinates, not even by some “coding trick”. So, we can take this example as a metaphor, or as a premonition of a generalization of the idea struggling to come into existence. I hope it is still able to shed light on what I mean by the power of a synonymy relation.
Anyway, here you want a normative direction pointing from vaguer languages to crisper languages. The crisper languages are better because they have synonymy. As you say “you don’t really understand something unless you can translate it into a transparent-context description”.
I want to construe this as about the power of synonymy relations. Crisp languages are better because we can use them to have interesting sequences of thoughts: their synonymy does something for us. And so not just any synonymy relation is better; powerful ones are better.
I strongly doubt, as may be clear, the possibility of a “simple” definition that could measure power in terms of e.g. elementary syntactic criteria, but I think that we still have to admit that it’s what we want here.
I have a feeling that something is missing from the Łukasiewicz stuff you’ve been doing because I don’t think that assigning a quantitative degree of truth is all that vagueness really ought to be about. Vague sentences shift their meanings as the context changes. Vague statements can become crisp statements, and accordingly our language can gain more powerful synonymy. As a slogan perhaps, “vagueness wants to become ambiguity”.
Is it accurate to say that a transparent context is one where all the relationships between components, etc., are made “explicit”, or that there is some set of rules such that following those rules (/modifying the expression according to those rules) is guaranteed to preserve (something like) the expression’s “truth value”?
That’s correct. More generally (since the concept also applies to noun phrases) guaranteed to preserve its “value” whatever type that may be. This “value” is something like what-it-points-at, semantic reference.