The Cognitive-Theoretic Model of the Universe: A Partial Summary and Review

About 15 years ago, I read Malcolm Gladwell’s Outliers. It profiled Chris Langan, an extremely high-IQ person, claiming that Langan had only mediocre accomplishments despite his high IQ. Chris Langan’s theory of everything, the Cognitive-Theoretic Model of the Universe, was mentioned. I considered that it might be worth checking out someday.

Well, someday has happened, and I looked into CTMU, prompted by Alex Zhu (who also paid me for reviewing the work). The main CTMU paper is “The Cognitive-Theoretic Model of the Universe: A New Kind of Reality Theory”.

CTMU has a high-IQ mystique about it: if you don’t get it, maybe it’s because your IQ is too low. The paper itself is dense with insights, especially the first part. It uses quite a lot of nonstandard terminology and has few citations relative to most academic works, partly because the author is outside the normal academic system. The work is incredibly ambitious, attempting to rebase philosophical metaphysics on a new unified foundation. As a short work, it can’t fully deliver on this ambition; it can provide a “seed” of a philosophical research program aimed at understanding the world, but few implications are drawn out.

In reading the work, there is a repeated sense of “what?”, staring at terms, and then “ohhh” as something clicks. These insights may actually be the main value of the work; at the end I still don’t quite see how everything fits together in a coherent system, but there were a lot of clicks along the way nonetheless.

Many of the ideas are similar to ideas from other intellectual traditions, such as “anthropics” and “acausal interaction”, but with less apparent mathematical precision, such that it’s harder to see exactly what is being said, and easier to round off to something imprecise and implausible.

There is repeated discussion of “intelligent design”, and Langan claims that CTMU proves the existence of God (albeit with a very different conceptualization than traditional religions). From the perspective of someone who witnessed the evolution/intelligent design debate of the 1990s and 2000s, siding with the “intelligent design” branch seems erroneous, although the version presented here differs quite a lot from more standard intelligent design argumentation. On the other hand, the “evolutionists” have gone on to develop complex and underspecified theories of anthropics, multiverses, and simulations, which bring some amount of fundamental or nearly-fundamental mind and agency back into the picture.

I didn’t finish summarizing and reviewing the full work, but what I have written might be useful to some people. Note that this is a very long post.


Perception is a kind of model of reality. Information about reality includes information about the information processor (“one’s self”), which is called reflexivity. The theory identifies mental and physical reality, in common with idealism. CTMU is described as a “supertautological reality-theoretic extension of logic”; logic deals in tautologies, and CTMU somehow deals in meta-tautologies. It is based in part on computational language theory (e.g. the work of Chomsky, and type theory). Central to CTMU is the Self-Configuring Self-Processing Language (SCSPL), a language that can reflect on itself and configure its own execution, perhaps analogous to a self-modifying program. SCSPL encodes a form of “dual-aspect monism” consisting of “infocognition”, integrated information and cognition. CTMU states that the universe comes from “unbounded telesis” (UBT), a “primordial realm of infocognitive potential free of informational constraint”; this may be similar to a language in which the physical universe could be “specified”, or perhaps even prior to a language. CTMU features “telic recursion” involving agent-like “telors” that are “maximizing a generalized self-selection parameter”, in an anthropic way that is like increasing their own anthropic probability, or “measure”, in a way that generalizes evolutionary self-reproduction. It includes interpretations of physical phenomena such as quantum mechanics (“conspansion”) and “temporal directionality and accelerating cosmic expansion”. It also includes an interpretation of “intelligent design” as it conceives of agent-like entities creating themselves and each other in a recursive process.


The introduction notes: “Among the most exciting recent developments in science are Complexity Theory, the theory of self-organizing systems, and the modern incarnation of Intelligent Design Theory, which investigates the deep relationship between self-organization and evolutionary biology in a scientific context not preemptively closed to teleological causation.”

Complexity theory, in contrast to traditional physical reductionism, gives rise to “informational reductionism”, which is foundational on information rather than, say, atoms. However, this reductionism has similar problems to other reductionisms. Separating information and matter, Langan claims, recapitulates Cartesian dualism; therefore, CTMU seeks to unify these, developing “a conceptual framework in which the relationship between mind and matter, cognition and information, is made explicit.”

DNA, although a form of information, is also embedded in matter, and would not have material effects without being read by a material “transducer” (e.g. a ribosome). Reducing everything to information, therefore, neglects the material embodiment of information processors.

Intelligent design theory involves probabilistic judgments such as “irreducible complexity”, the idea that life is too complex and well-organized to have been produced randomly by undirected evolution. Such probabilistic judgments rely on either a causal model (e.g. a model of how evolution would work and what structures it could create), or some global model that yields probabilities more directly.

Such a global model would have certain properties: “it must be rationally derivable from a priori principles and essentially tautological in nature, it must on some level identify matter and information, and it must eliminate the explanatory gap between the mental and physical aspects of reality. Furthermore, in keeping with the name of that to be modeled, it must meaningfully incorporate the intelligence and design concepts, describing the universe as an intelligently self-designed, self-organizing system.”

Creating such a model would be an ambitious project. Langan summarizes his solution: “How is this to be done? In a word, with language.”

This recalls Biblical verses on the relation of God to language. John 1:1 states: “In the beginning was the Word, and the Word was with God, and the Word was God.” (NIV). Theologian David Bentley Hart alternatively translates this as: “In the origin there was the Logos, and the Logos was present with God, and the Logos was god.” The Greek term “logos” means “word” and “speech” but also “reason”, “account”, and “discourse”.

Derrida’s frequently-misunderstood “there is nothing outside the text” may have a similar meaning.

Langan continues: “Not only is every formal or working theory of science and mathematics by definition a language, but science and mathematics in whole and in sum are languages.”

Formal logic as a language is a standard mathematical view. Semi-formal mathematics is more like a natural language than a formal language, being a method of communication between mathematicians that assures them of formal correctness. All mathematical discourse is linguistic but not vice versa; mathematics lacks the ability to refer to what is ill-defined, or what is empirical but indiscernible.

Science expands mathematics to refer to more of empirical reality, models of it, and elements of such. Like mathematics, science is a language of precision, excluding from the discourse sufficiently ill-defined or ambiguous concepts; this makes science unlike poetry.

Perhaps the empirical phenomena predicted by scientific discourse are not themselves language? Langan disagrees: “Even cognition and perception are languages based on what Kant might have called ‘phenomenal syntax’”.

Kant, famously, wrote that all empirical phenomena must appear in spacetime. This provides a type constraint on empirical phenomena, as in type theory. Finite spacetime phenomena, such as images and videos, are relatively easy to formalize in type theory. In a type theoretic language such as Agda, the type of 100 x 100 black-and-white bitmaps may be written as “Vector (Vector Bool 100) 100”, where “Bool” is the type of Booleans (true/false), and “Vector A n” is a list of n elements each of type A.
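Python lacks Agda’s compile-time dependent types, but the same constraint can be sketched as a runtime check: a value only counts as a bitmap if it inhabits the 100 x 100 Boolean “type”. This is my own illustrative stand-in, not anything from the paper.

```python
# A Python stand-in for the dependent type "Vector (Vector Bool 100) 100":
# the constructor enforces, at runtime, the shape constraint that Agda
# would check at compile time.

from typing import List

SIZE = 100  # matches the 100 x 100 bitmap in the text

def make_bitmap(rows: List[List[bool]]) -> List[List[bool]]:
    """Accept a value only if it inhabits the 'type' of 100 x 100 bitmaps."""
    if len(rows) != SIZE or any(len(r) != SIZE for r in rows):
        raise TypeError("not a 100 x 100 bitmap")
    if not all(isinstance(b, bool) for r in rows for b in r):
        raise TypeError("entries must be Booleans")
    return rows

blank = make_bitmap([[False] * SIZE for _ in range(SIZE)])
```

The point of the sketch is that the “phenomenon” (a bitmap) only exists within a fixed format, mirroring Kant’s claim that empirical phenomena must fit the form of spacetime.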

AI algorithms process inputs that are formatted according to the algorithm; for example, a convolutional neural network processes a rectangular array. So, the concept of a formal language applies to what we might think of as the “raw sense data” of a cognitive algorithm, and also to intermediate representations used by such algorithms (such as intermediate layers in a convolutional neural network).
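A minimal sketch of this, in plain Python rather than any real neural-network library: a single 2D convolution, whose very definition fixes the format of its input as a rectangular grid of numbers.

```python
# A single valid-mode 2D convolution over a rectangular grid -- the input
# "language" the algorithm accepts is baked into how the loops index it.

def conv2d(image, kernel):
    """Slide `kernel` over `image`, summing elementwise products."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

edge = conv2d([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [0, 0, 1, 1]],
              [[-1, 1]])  # horizontal edge detector; responds at the 0->1 step
```

Here the “sense data” must already be formatted as a grid before the algorithm can see it at all, which is the sense in which raw inputs and intermediate layers are both expressions in a formal language.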

Langan conceptualizes the laws of nature as “distributed instructions” applying to multiple natural objects at once (e.g. gravity), and together as a “‘control language’ through which nature regulates its self-instantiations”. This recalls CPU or GPU concepts, such as instructions that are run many times in a loop across different pieces of data, or programs or circuitry replicated across multiple computing units.

The nature of these laws is unclear; for example, it is unclear “where” (if anywhere) they are located. Asking this question is inherently difficult, like an intelligent program running on a CPU asking where the CPU’s circuits are: it is a wrong question to ask which memory register the circuits are located in, and it may likewise be a wrong question to ask at what physical coordinates the law of gravity is, although the CPU case shows that the “where” question may nonetheless have some answer.

Langan seeks to extend the empirical-scientific methods of physics and cosmology to answer questions that they cannot: “science and philosophy do not progress by regarding their past investments as ends in themselves; the object is always to preserve that which is valuable in the old methods while adjoining new methods that refine their meaning and extend their horizons.”

In the process, his approach “leaves the current picture of reality virtually intact”, but creates a “logical mirror image” or “conspansive dual” to the picture to create a more complete unified view (here, “conspansion” refers to a process of reality evolving that can be alternately viewed, dually, as space expanding or matter contracting).

On Theories, Models and False Dichotomies

Langan describes “reality theory”: “In the search for ever deeper and broader explanations, science has reached the point at which it can no longer deny the existence of intractable conceptual difficulties devolving to the explanatory inadequacies of its fundamental conceptual models of reality. This has spawned a new discipline known as reality theory, the study of the nature of reality in its broadest sense… Mainstream reality theory counts among its hotter foci the interpretation of quantum theory and its reconciliation with classical physics, the study of subjective consciousness and its relationship to objective material reality, the reconciliation of science and mathematics, complexity theory, cosmology, and related branches of science, mathematics, philosophy and theology.”

Common discourse often uses the concept of “real” to distinguish conceptions by whether they have “actual” referents, but it is not totally clear how to define “real” or how such a concept relates to scientific theories or their elements. Reality theory includes a theory of its application: since reality theory seeks to describe the “real” and is in some sense itself “real”, it must describe how it relates to the reality it describes.

Over time, continuum physics has “lost ground” to discrete computational/informational physics, in part due to the increased role of computer simulations in the study of physics, and in part due to quantum mechanics. Langan claims that, although discrete models have large advantages, they have problems with “scaling” and “nonlocality” (perhaps referring to how discrete models allow elements, e.g. particles, at nonzero distance from each other to directly influence each other), and that they lack the ability to describe the “medium, device, or array in which they evolve”, the “initial states”, and the “state-transition programming”.

I am not totally sure why he considers discrete models to be unable to describe initial states or state-transition programming. Typically, such states and state transitions are described by discrete computer or mathematical specifications/instructions. A discrete physical model, such as Conway’s game of life, must specify the initial state and state transitions, which are themselves not found within the evolving list of states (in this case, binary-valued grids); however, this is also the case for continuous models.
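The Game of Life case can be made concrete. In the sketch below, both the transition rule and the initial state live in the program text, outside the sequence of grids the rule generates; this is the sense in which they are “not found within the evolving list of states”.

```python
# Conway's Game of Life. The rule (`step`) and the initial state
# (`blinker`) are specified outside the state sequence itself.

from collections import Counter

def step(live):
    """One transition; `live` is a set of (x, y) coordinates of live cells."""
    # Count, for every cell, how many live neighbors it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next step if it has 3 neighbors (birth or survival),
    # or 2 neighbors and is already alive (survival).
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# Initial state: a "blinker", a vertical bar that oscillates with period 2.
blinker = {(0, -1), (0, 0), (0, 1)}
```

Note that nothing in the sequence `blinker, step(blinker), step(step(blinker)), …` contains the rule itself, and the same point would hold for a continuous model’s differential equations and initial conditions.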

Langan also claims that discrete physical models are “anchored in materialism, objectivism, and Cartesian dualism”; such models typically model the universe from “outside” (a “view from nowhere”) while leaving unclear the mapping between such a perspective and the perspectives of agents within the system, leading to anthropic paradoxes.

Langan notes that classical and informational models of reality, while well-defined, each lack the ability to “account for [their] own genesis”. CTMU seeks to synthesize classical with quantum models, attaining the best of both worlds.

Determinacy, Indeterminacy and the Third Option

Both classical and computational models have a mix of causality and stochasticity: a fully deterministic model would fail to account for phenomena that seem “fundamentally unexplainable” such as quantum noise. While causality and stochasticity seem to exhaust all possible explanations for empirical phenomena, Langan suggests self-determinacy as a third alternative, in which “a system determines its own composition, properties, and evolution independently of external laws and structures”.

This suggests cyclic time as a possible analogy, or anthropics, in which the conditions of mental representation themselves determine the empirical, material circumstances such minds find themselves in.

Langan notes cybernetic feedback (in which various entities regulate each other with positive and negative feedback, finding an equilibrium) as a possible analogy. However, he rejects this, since cybernetic feedback between entities “is meaningless where such entities do not already exist”. Accordingly, “the feedback is ontological in nature and therefore more than cybernetic.”

Ontological feedback is a rather confusing concept. One visualization is to imagine a map that represents a world, and itself as a part of a world with a plausible origin; whenever such a map fails to find a plausible origin of itself, it (and the world it describes) fails to exist. This is in some ways similar to anthropic self-selection.

Ontological feedback is “cyclical or recursive”, but while ordinary recursion (e.g. in a recursive algorithm) runs on informational components that already exist, ontological feedback deals with components that do not yet exist; therefore, a new type of feedback is required, “telic feedback”.

Langan writes: “The currency of telic feedback is a quantifiable self-selection parameter, generalized utility, a generalized property of law and state in the maximization of which they undergo mutual refinement”. Generalized utility may be compared to “anthropic measure” or “evolutionary measure”, but it isn’t exactly the same. Since some systems exist and others don’t, a “currency” is appropriate, like probability is a currency for anticipations.

Unlike probabilities over universe trajectories, telic feedback doesn’t match an acyclic time ordering: “In effect, the system brings itself into existence as a means of atemporal communication between its past and future”. There may be some relation to Newcomblike scenarios here, in which one’s existence (e.g. ability to sustain one’s self using money) depends on acausal coordination across space and time.

Unlike with Newcomblike scenarios and ordinary probability theory, telic feedback deals with feedback over syntax, not just state. The language the state is expressed in, not merely the state itself, depends on this feedback. Natural languages work somewhat like this, in that the syntax of the English language depends on the state trajectory of historical evolution of language and culture over time; we end up describing this historical state trajectory using a language that is in large part a product of it.

Similarly, the syntax of our cognitive representations (e.g. in the visual cortex) depends on the state trajectories of evolution, a process that is itself described using our cognitive representations.

Even the formal languages of mathematics and computer science depend on a historical process of language and engineering; while it is tempting to say that Turing’s theory of computation is “a priori”, it cannot be fully a priori while being Turing’s. Hence, Langan describes telic feedback as “protocomputational” rather than “computational”, as a computational theory would assume as given syntax for describing computations.

The closest model to “telic feedback” I have encountered in the literature is Robin Hanson’s “Uncommon Priors Require Origin Disputes”, which argues that different agents must share priors as long as they have compatible beliefs about how each agent originated. The similarity is imagining that different agents create representations of the world that explain their own and others’ origins (e.g. explaining humans as evolved), and these representations come together into some shared representation (which in Hanson’s formulation is a shared belief state, and in Langan’s is the universe itself), with agents being more “likely” or “existent” the more plausible their origin stories are (e.g. Hanson might appeal to approximately Bayesian beliefs being helpful for survival).

My own analogy for the process: travelers from different places find themselves at a common location, having little language in common. They form a pidgin and tell origin stories about themselves, with stories rejected if too implausible, so that each traveler is optimizing to make their type of person seem plausible in the shared language.

The Future of Reality Theory According to John Wheeler

John Wheeler is a famous physicist who coined the term “black hole”. Langan discusses Wheeler’s views on philosophy of science in part because of the similarity of their views.

In Beyond the Black Hole, Wheeler describes the universe as a “self-excited circuit”, analogized to a diagram of a U with an eye on one branch looking back at the other, representing how subject is contiguous with object. Viewing the universe as a self-excited circuit requires cognizing perception as part of the self-recognition of reality, and physical matter as informational and therefore isomorphic to perception.

Wheeler describes the universe as “participancy”: the participation of observers is part of the dynamics of the universe. A participatory universe is “a ‘logic loop’ in which ‘physics gives rise to observer participancy; observer-participancy gives rise to information; and information gives rise to physics.’”

The “participatory principle” is similar to the anthropic principle but stronger: it is impossible to imagine a universe without observers (perhaps because even the vantage point from which the universe is imagined is a type of observer). According to the participatory principle, “no elementary phenomenon is a phenomenon until it is an observed (or registered) phenomenon”, generalizing from quantum mechanics’ handling of classical states.

Wheeler considers the question of where physical laws come from (“law without law”) to be similar to the question of how order comes from disorder, with evolution as an example. Evolution relies on an orderly physics in which the organisms exist, and there is an open question of whether the physical laws themselves have undergone a process analogous to evolution that may yield orderly physics from a less orderly process.

Wheeler also considers explaining how the macroscopic universe we perceive comes from the low-level information processing of quantum mechanics (“It from bit”). Such an explanation must explain “How come existence?”: why do we see continuous time and space given a lack of physically fundamental time and space? It must also explain “How come the quantum?”, how does quantum mechanics relate to the world we see? And finally, it must explain “How come the ‘one world’ out of many observer-participants”: why do different agents find themselves in “the same” world rather than solipsistic bubbles?

The “It from bit” explanation must, Wheeler says, avoid four errors: “no tower of turtles” (infinite regresses which ultimately fail to explain), “no laws” (lack of pre-existing physical laws governing continuum dynamics), “no continuum” (no fundamental infinitely divisible continuum, given lack of mathematical or physical support for such an infinity), “no space or time” (space and time lack fundamental existence: “Wheeler quotes Einstein in a Kantian vein: ‘Time and space are modes by which we think, and not conditions in which we live’”).

Wheeler suggests some principles for constructing a satisfactory explanation. The first is that “The boundary of a boundary is zero”: this is an algebraic topology theorem showing that, when taking a 3d shape, and then taking its 2d boundary, the boundary of the 2d boundary is nothing, when constructing the boundaries in a consistent fashion that produces cancellation; this may somehow be a metaphor for ex nihilo creation (but I’m not sure how).
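The slogan is the standard simplicial identity ∂∘∂ = 0 (this worked instance is mine, not from the paper). For an oriented triangle (2-simplex) [a,b,c]:

```latex
% The boundary of the triangle [a,b,c] is its three oriented edges:
\partial\,[a,b,c] = [b,c] - [a,c] + [a,b]
% Taking the boundary again, every vertex appears once with each sign:
\partial\,\partial\,[a,b,c] = ([c]-[b]) - ([c]-[a]) + ([b]-[a]) = 0
```

Each vertex is entered and exited exactly once as one walks around the triangle, so the signed contributions cancel: a nontrivial structure (the cycle of edges) has exactly nothing as its boundary, which is perhaps the intended metaphor for something arising with no residue outside itself.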

The second is “No question? No answer”, the idea that un-asked questions don’t in the general case have answers, e.g. in quantum measurement the measurement being made (“question”) changes future answers, so there is no definite state prior to measurement. This principle implies a significant degree of ontological entanglement between observers and what they observe.

The third is “The Super-Copernican Principle”, stating that no place or time (“now”) is special; our universe is generated by both past and future. It is rather uncontroversial that past observers affect present phenomena; what is rather more controversial is the idea that this isn’t enough, and the present is also influenced by future observers, in a pseudo-retrocausal manner. This doesn’t imply literal time travel of the sort that could imply contradictions, but is perhaps more of an anthropic phenomenon: the observations that affect the future “exist more” in some sense; observations are simultaneously summaries of the past and memories interpreted by the future. Sometimes I think my observations are more likely to come from “important” agents that influence the future (i.e. I think I’m more important than a random person), which, confusingly, indicates some influence of future observers on present measure.

The fourth is “Consciousness”, stating that it’s hard to find a line between what is conscious and unconscious; the word “who” archetypally refers to humans, so overusing the concept indicates anthropocentrism.

The fifth is “More is different”: there are more properties of a system that is larger, due to combinatorial explosion. Quantitative differences produce qualitative ones, including a transition to “multi-level logical structures” (such as organisms and computers) at a certain level of complexity.

Langan notes: “Virtually everybody seems to acknowledge the correctness of Wheeler’s insights, but the higher-order relationships required to put it all together in one big picture have proven elusive.” His CTMU attempts to hit all desiderata.

Some Additional Principles

According to Langan, Descartes argued that reality is mental (rationalism), but went on to assert mind-body dualism, which is contradictory (I don’t know enough about Descartes to evaluate this statement). Berkeley, an empiricist, said reality is perceptual, an intersection of mind and matter; if perception is taken out of one’s conception of reality, what is left is pure subjective cognition. Langan compares eliminativism, an attempt to subtract cognition from reality, to “trying to show that a sponge is not inherently wet while immersing it in water”: as cognitive entities, we can’t succeed in eliminating cognition from our views of reality. (I basically agree with this.)

Hume claims causation is a cognitive artifact, raising the “problem of induction” in its place. Langan comments that “the problem of induction merely implies that a global theory of reality can only be established by the rational methods of mathematics, specifically including those of logic.” This seems like it may be a misread of Hume given that Hume argued that deductive reasoning was insufficient for deriving a global theory of reality (including causal judgments).

Kant asserted that “unprimed cognition” exists prior to particular contexts (e.g. including time, space, and causation), but also asserts the existence of a disconnected “noumenal” realm, which Langan argues is irrelevant and can be eliminated.

Scientists interpret their observations, but their interpretations are often ad hoc and lack justification. For example, it is unclear how scientists come up with their hypotheses, although cognitive science includes some study of this question, e.g. Bayesian brain theory. Langan investigates the question of attribution of meaning to scientific theories by walking through a sequence of more powerful logics.

“Sentential logic” is propositional calculus; it reasons about truth values of various sentences. Propositional tautologies can be composed, e.g. X & Y is tautological if X and Y are. Predicate logic extends propositional logic to be able to talk about a possibly-infinite set of objects using quantifiers, assigning properties to these objects using predicates. Model theory further extends predicate logic by introducing universes consistent with axiom schemas and allowing reasoning about them.
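The claim about composing tautologies can be checked mechanically. Below is a minimal brute-force sketch (function names are mine): a formula is a tautology iff it is true under every assignment of truth values to its variables.

```python
# Brute-force semantics for sentential logic: formulas are modeled as
# Python functions of Boolean arguments, and tautology-hood is checked
# by enumerating all 2^n truth assignments.

from inspect import signature
from itertools import product

def is_tautology(formula):
    n = len(signature(formula).parameters)
    return all(formula(*vals) for vals in product([True, False], repeat=n))

def lem(x):          # X | !X, a tautology
    return x or not x

def lem_pair(x, y):  # the conjunction of two tautologies is a tautology
    return (x or not x) and (y or not y)
```

Here `is_tautology(lem)` and `is_tautology(lem_pair)` both hold, illustrating that tautologies compose under “&”, while a contingent formula like `x or y` fails the check.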

Langan claims reality theory must emulate 4 properties of sentential logic: absolute truth (truth by definition, as propositional calculus defines truth), closure (the logic being “wholly based on, and defined strictly within the bounds of, cognition and perception”), comprehensiveness (that the logic “applies to everything that can be coherently perceived or conceived”), and consistency (“designed in a way that precludes inconsistency”).

While logic deals in what is true or false, reality theory deals in what is real or unreal (perhaps similar to the epistemology/​ontology distinction). It must “describe reality on a level that justifies science, and thus occupies a deeper level of explanation than science itself”; it must even justify “mathematics along with science”, thus being “metamathematical”. To do this, it must relate theory and universe under a “dual aspect monism”, i.e. it must consider theory and universe to be aspects of a unified reality.

Logicians, computer scientists, and philosophers of science are familiar with cases where truth is ambiguous: logical undecidability (e.g. Gödel’s incompleteness theorem), NP completeness (computational infeasibility of finding solutions to checkable problems), Lowenheim-Skolem (ambiguity of cardinalities of models in model theory), Duhem-Quine (impossibility of testing scientific theories in isolation due to dependence on background assumptions). Langan claims these happen because the truth predicate comes apart from “attributive mappings” that would assign meaning to these predicates. He also claims that falsificationist philosophy of science “demotes truth to provisional status”, in contrast to tautological reasoning in logic. (On the other hand, it seems unclear to me how to get any of science from tautologies, given the empirical nature of science.)

Langan desires to create an “extension” to tautological logic to discuss physical concepts such as space, time, and law. He notes a close relationship between logic, cognition, and perception: for example, “X | !X” when applied to perception states that something and its absence can’t both be perceived at once (note that “X | !X” is equivalent to ”!(X & !X)” in classical but not intuitionistic logic).
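The parenthetical claim about classical equivalence can be verified by enumerating both valuations of X; note that truth tables only capture the classical case, which is exactly why the equivalence fails intuitionistically.

```python
# Classically, "X | !X" (excluded middle) and "!(X & !X)" (non-contradiction)
# have identical truth tables; check by enumerating both values of X.

table_lem = [(x, x or not x) for x in (True, False)]
table_nc = [(x, not (x and not x)) for x in (True, False)]

classically_equivalent = table_lem == table_nc  # both columns are all-True
```

Intuitionistic logic, lacking double-negation elimination, validates non-contradiction but not excluded middle, so no such finite table settles the matter there.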

Sentential logic, however, is incomplete on its own, since it needs a logician to interpret it. Nature, on the other hand, interprets itself, having “self-processing capability”. Accordingly, reality theory should include a mental component in its logic, allowing the logic to process itself, as if by an external mind but instead by itself.

Langan states his main angle of attack on the problem: “the way to build a theory of reality is to identify the properties that it must unconditionally possess in order to exist, and then bring the theory into existence by defining it to possess these properties without introducing merely contingent properties that, if taken as general, could impair its descriptive relationship with the real universe (those can come later and will naturally be subject to empirical confirmation).”

These properties will include the “3 C’s”: “comprehensiveness (less thorough but also less undecidable than completeness), closure, and consistency”. These will correspond to three principles, the “3 M’s”: “M=R, MAP and MU, respectively standing for the Mind Equals Reality Principle, the Metaphysical Autology Principle, and the Multiplex Unity Principle.” Briefly, M=R “dissolves the distinction between theory and universe… [making] the syntax of this theory comprehensive”, MAP “tautologically renders this syntax closed or self-contained”, and MU “tautologically renders this syntax, and the theory-universe complex it describes, coherent enough to ensure its own consistency”.

CTMU’s definitions of concepts are unavoidably recursive, perhaps similarly to mutually recursive definitions in mathematics or programming. Langan claims: “Most theories begin with axioms, hypotheses and rules of inference, extract implications, logically or empirically test these implications, and then add or revise axioms, theorems or hypotheses. The CTMU does the opposite, stripping away assumptions and ‘rebuilding reality’ while adding no assumptions back.” This recalls Kant’s project of stripping away and rebuilding metaphysics from a foundation of what must be the case a priori.

The Reality Principle

Reality contains all and only that which is real; if something else influenced reality, it would be part of reality. As a definition this is circular: if we already accept the reality of a single thing, then the reality of other things can be derived from their influence on that thing. The circularity invites some amount of ontological dispute over which foundational things can be most readily accepted as real. Langan considers an alternative definition: “Reality is the perceptual aggregate including (1) all scientific observations that ever were and ever will be, and (2) the entire abstract and/or cognitive explanatory infrastructure of perception”. This definition seems to lean idealist in defining reality as a perceptual aggregate, expanding from scientific observation in the direction of mind rather than matter.


Langan writes: “Syndiffeonesis implies that any assertion to the effect that two things are different implies that they are reductively the same; if their difference is real, then they both reduce to a common reality and are to that extent similar. Syndiffeonesis, the most general of all reductive principles, forms the basis of a new view of the relational structure of reality.”

As an example, consider apples and oranges. They’re different, but what lets us know that they are different? Since they both come from plants, they have DNA that can be compared to show that they are different. Also, since they both have a shape, their shapes can be compared and found to be different. Since they both have a taste, they can be tasted to tell that they are different. Each of these comparisons showing difference requires apples and oranges to have something in common, demonstrating syndiffeonesis.

This principle can be seen in type theory; generally, to compare two terms for equality, the terms must have the same type, e.g. 5 and 6 can be found to be unequal since they are both natural numbers.
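
A minimal sketch of this type-theoretic point (my illustration, not from the source; the `compare` helper is hypothetical): an equality comparison is only meaningful between terms of a shared type, so the comparison itself witnesses something the terms have in common.

```python
def compare(a, b):
    """Compare two terms for equality, but only within a shared type."""
    if type(a) is not type(b):
        # No common type means the comparison itself is ill-formed.
        raise TypeError(f"no common type: {type(a).__name__} vs {type(b).__name__}")
    return a == b

# 5 and 6 share the type int, so they can be found unequal:
assert compare(5, 6) is False
assert compare(5, 5) is True
```

Dynamically typed languages enforce this less strictly than type theory does, but the sketch captures the idea that difference presupposes a common ground of comparison.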

The commonality is in medium and syntax: “The concept of syndiffeonesis can be captured by asserting that the expression and/​or existence of any difference relation entails a common medium and syntax, i.e. the rules of state and transformation characterizing the medium.” Syntax can be seen in type theory, since terms that can be compared for equality are both written in the same type theoretic syntax. Medium is less straightforward; perhaps apples and oranges both existing in spacetime in the same universe would be an example of a common medium.

Langan claims: “Every syndiffeonic relation has synetic and diffeonic phases respectively exhibiting synesis and diffeonesis (sameness and difference, or distributivity and parametric locality), and displays two forms of containment, topological and descriptive.” The common medium and syntax goes with the synetic phase, while the difference relation goes with the diffeonic phase. One can imagine comparing two things by finding the smallest common “supertype” of both (e.g. fruit for apples/​oranges); in this case the specification from “something” to “a fruit” is synetic (in common between apples and oranges, specifying a common medium and syntax), and the specification from “a fruit” to “apple” or “orange” is diffeonic (showing that they are different fruits).
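
The “smallest common supertype” reading can be sketched in code (my analogy; the class names `Fruit`, `Apple`, `Orange` are hypothetical): the shared ancestor is the synetic part, and the subclass distinction is the diffeonic part.

```python
class Fruit: pass
class Apple(Fruit): pass
class Orange(Fruit): pass

def common_supertype(a, b):
    """Return the most specific class shared by both instances' ancestries."""
    for cls in type(a).__mro__:          # walk from most to least specific
        if cls in type(b).__mro__:
            return cls

# The synetic phase: apples and oranges agree up to Fruit.
assert common_supertype(Apple(), Orange()) is Fruit
# Identical things agree all the way down.
assert common_supertype(Apple(), Apple()) is Apple
```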

If two things aren’t expressed in the same syntax, then the fact that their syntaxes are different itself is a diffeonic relation indicating an underlying, more base-level common syntax and medium. For example, while Python programs are syntactically different from Forth programs, they are both expressed as text files. Python files and apples have even less similar syntax, but both exist in physical space, and can be displayed visually. Langan adds: “Any such syndiffeonic regress must terminate, for if it did not, there would be no stable syntax and therefore no ‘relation’ stable enough to be perceived or conceived.”

Langan uses the notation “X ∽ Y” to indicate the common medium shared by X and Y, with smallest common supertypes possibly being an example. If X and Y are different laws (e.g. physical laws), then X ∽ Y denotes a common set of laws that both X and Y are expressed in; for example, many different physical laws can be expressed as instances of energy conservation.

By using the ∽ operator to iteratively find a common medium for all possible perceptible and cognizable things, the universal base medium and syntax of reality is found. This is perhaps similar to a generative grammar of concepts, and is elaborated on in the SCSPL section.

The Principle of Linguistic Reducibility

Following from the discussion of a common medium of reality, Langan writes: “Reality is a self-contained form of language”. It has representations of object-like individuals, space-like relations and attributes, and time-like functions and operators. Our theories of physics have these; physics is a language that can express many different specific physical theories.

Langan argues: “because perception and cognition are languages, and reality is cognitive and perceptual in nature, reality is a language as well.” In typical moments, a person is aware of entities which are related and/or have functions applied to them, which could be analogized to language processing. Langan also adds, “whereof that which cannot be linguistically described, one cannot perceive or conceive”, following Wittgenstein’s “whereof one cannot speak, thereof one must be silent”.

Theories of everything attempt to reduce everything to a language. They point to objective matter, but such “pointing” is itself something contained in the whole, sharing structure with the theory; for example, a theory of mass may involve procedures for measuring mass, which tie the theory of mass to its objective subject matter. Such a relation between theory and reality suggests the syndiffeonic relation “Language ∽ Reality”.

The term “reality” can be analyzed as a linguistic construct: in what cases do words like “real” or “reality” show up, and when are these valid? Sometimes “real” shows up to indicate an inadequacy of a conception, e.g. inability to explain some empirical phenomenon, which is considered “real” unlike the predictions of the wrong theory.

Langan is optimistic about understanding reality linguistically. If we understand reality as a linguistic element, does it follow that we understand reality? It is empirically always the case that our linguistic theories are inadequate in some way, failing to predict some phenomena, or imposing a wrong ontology that has holes; but even these failures can be understood as relations of the linguistic theory to something that can be incorporated into later linguistic theories.

CTMU considers the base elements of reality to be “syntactic operators” that transform linguistic entities including themselves; reality is therefore conceptualized as a dynamical process transforming linguistic content such as theories. Insofar as our theories better approximate the real over time, there must be some sense in which reality is similar to a “syntactic operator”, although the details of the theory remain to be seen.

Syntactic Closure: The Metaphysical Autology Principle (MAP)

Langan writes: “All relations, mappings and functions relevant to reality in a generalized effective sense, whether descriptive, definitive, compositional, attributive, nomological or interpretative, are generated, defined and parameterized within reality itself.” As a result, reality is closed; there is no way of describing reality except in terms of things that are themselves real.

The Metaphysical Autology Principle implies this sort of closure: reality theory must “take the form of a closed network of coupled definitions, descriptions, explanations and interpretations that refer to nothing external to reality itself”.

Autology is the study of one’s self; reality studies itself in the sense of containing predicates about itself and informational manipulators (such as human scientists) that apply these predicates to reality. Reality theory requires a 2-valued logic distinguishing what is real from what is not, e.g. it may contain the statement “a predicate of something real is real”.

As an example, consider measuring the size of the universe with a unit length. With a standard ruler, it is possible to measure medium-sized objects, and with theory, it is possible to extrapolate to estimate the size of large objects such as the earth or solar system, or even the entire universe. However, the unit length (the standard ruler) is an object in the universe. There is no “view from nowhere” that contains a measuring unit that can be used to measure the universe. Reality is understood in terms of its own components.

What if there is something like a view from nowhere, e.g. an outer universe simulating ours? “If an absolute scale were ever to be internally recognizable as an ontological necessity, then this would simply imply the existence of a deeper level of reality to which the scale is intrinsic and by which it is itself intrinsically explained as a relative function of other ingredients.” So we include the outer universe in “reality” and note that the outer unit is still part of reality.

An approximate Solomonoff inductor predicts the reality generating its percepts as if it’s external. But, as theorists reasoning about it, we see that there’s a (Solomonoff inductor, external matter, I/​O relation) system, so we know that the inductor is part of reality. Then we look at ourselves looking at this system and note that our reasoning about this inductor is, too, part of reality.

Langan defines the “recurrent fallacy” to be: “The existence of the universe is given and therefore in no need of explanation.” “Is given” hides what needs to be explained, which should be part of reality; explaining reality in terms of reality implies some sort of cyclicality as discussed earlier.

If the universe were inexplicable, that would imply that it came into being by magic; if there is no magic, the “five whys” must bottom out somewhere. I am less certain than Langan that there are no “magic” unexplained phenomena like fundamental randomness (e.g. in anthropics), but I understand that such explanations are inherently less satisfactory than successful deterministic ones.

Syntactic Comprehensivity-Reflexivity: the Mind Equals Reality Principle (M=R)

Langan defines the M=R principle: “The M=R or Mind Equals Reality Principle asserts that mind and reality are ultimately inseparable to the extent that they share common rules of structure and processing.” This is closely related to linguistic reducibility and can be represented as “Mind ∽ Reality”.

Separating mind and reality (e.g. Cartesian dualism) assumes the existence of a common medium translating between them. If the soul were in another dimension connected to the pineal gland, that dimension would presumably itself be in some ways like physical spacetime and contain matter.

Langan writes: “we experience reality in the form of perceptions and sense data from which the existence and independence of mind and objective external reality are induced”. This is similar to Kant’s idea that what we perceive are mental phenomena, not noumena. Any disproof of this idea would be cognitive (as it would have to be evaluated by a mind), undermining a claim of mind-independence. (I am not sure whether this is strictly true; perhaps it’s possible to be “hit” by something outside your mind that is not itself cognition or a proof, which can nonetheless be convincing when processed by your mind?). Perceptions are, following MAP, part of reality.

He discusses the implications of a Kantian phenomenon/​noumenon split: “if the ‘noumenal’ (perceptually independent) part of reality were truly unrelated to the phenomenal (cognition-isomorphic) part, then these two ‘halves’ of reality would neither be coincident nor share a joint medium relating them. In that case, they would simply fall apart, and any integrated ‘reality’ supposedly containing both of them would fail for lack of an integrated model.” Relatedly, Nietzsche concluded that the Kantian noumenon could be dropped, as it is by definition unrelated to any observable phenomena.

Syntactic Coherence and Consistency: The Multiplex Unity Principle (MU)

Langan argues: “we can equivalently characterize the contents of the universe as being topologically ‘inside’ it (topological inclusion), or characterize the universe as being descriptively ‘inside’ its contents, occupying their internal syntaxes as acquired state (descriptive inclusion).”

Topological inclusion is a straightforward interpretation of spacetime: anything we see (including equations on a whiteboard) is within spacetime. On the other hand, such equations aim to “capture” the spatiotemporal universe; to the extent they succeed, the universe is “contained” in such equations. Each of these containments enforces consistency properties, leading to the conclusion that “the universe enforces its own consistency through dual self-containment”.

The Principle of Hology (Self-composition)

Langan writes: “because reality requires a syntax consisting of general laws of structure and evolution, and there is nothing but reality itself to serve this purpose, reality comprises its own self-distributed syntax under MU”. As a special case, the language of theoretical physics is part of reality and is a distributed syntax for reality.

Duality Principles

Duality commonly shows up in physics and mathematics. It is a symmetric relation: “if dualizing proposition A yields proposition B, then dualizing B yields A.” For example, a statement about points (e.g. “Two non-coincident points determine a line”) can be dualized to one about lines (“Two non-parallel lines determine a point”) and vice versa.
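
The point-line duality can be made concrete (my sketch, standard projective-plane machinery; the helper names are mine): both a point and a line are triples of homogeneous coordinates, incidence is the symmetric relation a·x + b·y + c·z = 0, and the same cross-product formula answers the dual questions "which line passes through two points?" and "at which point do two lines meet?".

```python
def incident(point, line):
    """Symmetric incidence relation between homogeneous triples."""
    return sum(p * l for p, l in zip(point, line)) == 0

def line_through(p, q):
    """Cross product: the unique line through two non-coincident points."""
    (a, b, c), (d, e, f) = p, q
    return (b*f - c*e, c*d - a*f, a*e - b*d)

# Dually, the identical formula gives the point where two lines meet.
meet = line_through

p, q = (1, 0, 1), (0, 1, 1)
L = line_through(p, q)
assert incident(p, L) and incident(q, L)   # both points lie on the line
```

The symmetry of `incident` in its two arguments is exactly what makes every incidence theorem dualizable.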

Langan contrasts spatial duality principles (“one transposing spatial relations and objects”) with temporal duality principles (“one transposing objects or spatial relations with mappings, functions, operations or processes”). This is now beyond my own understanding. He goes on to propose that “Together, these dualities add up to the concept of triality, which represents the universal possibility of consistently permuting the attributes time, space and object with respect to various structures”, which is even more beyond my understanding.

The Principle of Attributive (Topological-Descriptive, State-Syntax) Duality

There is a duality between sets and relations/attributes. The subset judgment “X is a subset of Y” corresponds to a judgment of implication between attributes: “Anything satisfying X also satisfies Y”. This relates back to the duality between topological and descriptive inclusion.

Sets and logic are described with the same structure, e.g. logical “and” corresponds with set intersection, and logical “or” corresponds with set union. Set theory focuses on objects, describing sets in terms of objects; logic focuses on attributes, describing constraints to which objects conform. The duality between set theory and logic, accordingly, relates to a duality between states and the syntax to which these states conform, e.g. between a set of valid grammatical sentences and the logical grammar of the language.

Langan writes that the difference between set theory and logic “hinges on the univalent not functor (~), on which complementation and intersection, but not union, are directly or indirectly defined.” It is clear that set complement is defined in terms of logical not. I am not sure what definition of intersection Langan has in mind; perhaps the intersection of A and B is the subset of A that is not outside B?
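
Both candidate readings can be checked on a finite universe (my sketch, not Langan's definition): complement plays the role of logical “not”, intersection falls out of complement and union via De Morgan, and the guessed reading “the subset of A not outside B” agrees with ordinary intersection.

```python
U = set(range(10))                      # a finite universe of discourse
comp = lambda s: U - s                  # set complement, i.e. logical "not"

A, B = {1, 2, 3, 4}, {3, 4, 5, 6}

# Intersection derived indirectly from "not" and union (De Morgan):
assert A & B == comp(comp(A) | comp(B))
# The guessed reading: the subset of A that is not outside B.
assert A & B == {x for x in A if x not in comp(B)}
```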

Constructive-Filtrative Duality

Construction of sets can be equivalently either additive (describing the members of the set) or subtractive (describing a restriction of a larger set to those elements satisfying a given property). This leads to constructive-filtrative duality: “CF duality simply asserts the general equivalence of these two kinds of process with respect to logico-geometric reality...States and objects, instead of being constructed from the object level upward, can be regarded as filtrative refinements of general, internally unspecified higher-order relations.”
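
A minimal sketch of the two constructions (my example): the same set built additively, by listing members, and filtratively, by restricting a wider domain with a property.

```python
# Additive: built from the object level up, member by member.
constructed = {0, 2, 4, 6, 8}

# Filtrative: a wider potential (range(10)) refined by a constraint.
filtered = {n for n in range(10) if n % 2 == 0}

assert constructed == filtered
```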

CF duality relates to the question of how it is possible to get something from nothing. “CF duality is necessary to show how a universe can be ‘zero-sum’; without it, there is no way to refine the objective requisites of constructive processes ‘from nothingness’. In CTMU cosmogony, ‘nothingness’ is informationally defined as zero constraint or pure freedom (unbound telesis or UBT), and the apparent construction of the universe is explained as a self-restriction of this potential.”

In describing the universe, we could either have said “there are these things” or “the UBT is restricted in this way”. UBT is similar to the God described in Spinoza’s Ethics, an infinite substance of which every finite thing is a modification, and to the Tegmark IV multiverse.

As an application, consider death: is death a thing, or is it simply that a life is finite? A life can be constructed additively as a set of moments, or filtratively as the set of all possible moments restricted by, among other things, a finite lifespan, with death acting as a filter that excludes moments after the time of death from being included in the life. This is similar to Spinoza’s idea that anything finite is derived by bounding something infinite.

Conspansive Duality

There is a duality between cosmic expansion and atom shrinkage. We could either posit that the universe is expanding and the atoms are accordingly getting further apart as space stretches, or we could equivalently posit that atoms themselves are shrinking in a fixed-size space, such that the distances between atoms increase relative to the sizes of each atom.
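
A hedged numeric sketch of why the two descriptions are equivalent (my illustration; the numbers are arbitrary): the observable quantity is the dimensionless ratio of inter-atomic distance to atom size, and it comes out the same whether we scale distances up or atom sizes down.

```python
def ratio(distance, atom_size):
    """The dimensionless, observable quantity: separation in units of atom size."""
    return distance / atom_size

k = 2.0                                   # scale factor over some epoch
expanding_space = ratio(k * 10.0, 1.0)    # distances grow, atoms fixed
shrinking_atoms = ratio(10.0, 1.0 / k)    # distances fixed, atoms shrink

assert expanding_space == shrinking_atoms == 20.0
```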

This is an instance of an ectomorphism: “Cosmic expansion and ordinary physical motion have something in common: they are both what might be called ectomorphisms. In an ectomorphism, something is mapped to, generated or replicated in something external to it.” For example, a set of atoms may be mapped to a physical spacetime that is “external” to the atoms.

Langan critiques ectomorphisms: “However, the Reality Principle asserts that the universe is analytically self-contained, and ectomorphism is inconsistent with self-containment.” Since spacetime is part of reality, mapping atoms to spacetime is mapping them into reality; however, it is unclear how to map spacetime itself to any part of reality. See also Zeno’s Paradox of Place.

In contrast, in endomorphism, “things are mapped, generated or replicated within themselves”. An equation on a whiteboard is in the whiteboard, but may itself describe that same whiteboard; thus, the whiteboard is mapped to a part of itself.

Langan specifically focuses on “conspansive endomorphism”, in which “syntactic objects are injectively mapped into their own hological interiors from their own syntactic boundaries.” I am not sure exactly what this means; my guess is that it means that linguistic objects (“syntactic objects”) are mapped to the interior of what they describe (what is within their “syntactic boundaries”); for example, an equation on a whiteboard might map to the interior of the whiteboard described by the equation.

Conspansion “shifts the emphasis from spacetime geometry to descriptive containment, and from constructive to filtrative processing”, where physical equations are an example of filtrative processing, as they describe by placing constraints on their subject matter.

In a conspansive perspective on physics, “Nothing moves or expands ‘through’ space; space is state, and each relocation of an object is just a move from one level of perfect stasis to another.” In Conway’s Game of Life, each state is an assignment of values to the cells of a grid. Each state is itself static, and “future” states follow from “previous” states, but each particular state is static.
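
A standard Game of Life step function illustrates this (my example, not from the source): each grid state is a static object, and “time” is nothing but the ordering of successive states produced by the rule.

```python
from collections import Counter

def step(live):
    """Advance a set of live (x, y) cells by one generation of Conway's rules."""
    counts = Counter((x + dx, y + dy) for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next step iff it has 3 neighbors, or 2 and was already live.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}                 # horizontal bar
assert step(blinker) == {(1, 0), (1, 1), (1, 2)}   # flips to vertical
assert step(step(blinker)) == blinker              # period-2 oscillator
```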

A Minkowski diagram is a multidimensional graph showing events on a timeline, where time is one of the axes. Objects mediate interactions between events, e.g. if in one event a ball is kicked and in another the ball hits the wall, the ball connects these events. This is similar to “resource logics” such as variants of linear logic, in which the “events” correspond to ways of transforming propositions to other propositions, and “objects” correspond to the propositions themselves.

In a quantum context, events include interactions between particles, and objects include particles. Particles don’t themselves have a consistent place or time, as they move in both space and time; events, however, occur at a particular place and time. Due to speed of light limits, future events can only follow from past events that are in their past lightcone. This leads to a discrete, combinatorial, rhizomic view of physics, in which events proceed from combinations of other events, and more complex events are built from simpler earlier events. Accordingly, “spacetime evolves linguistically rather than geometrodynamically”.

From a given event, there is a “circle” of possible places where future events could arise by a given time, based on the speed of light. “Time arises strictly as an ordinal relationship among circles rather than within circles themselves.” Langan argues that, by reframing spacetime and events this way, “Conspansion thus affords a certain amount of relief from problems associated with so-called ‘quantum nonlocality’.” Locality is achieved by restricting which events can interact with other events based on those events’ positions and times, and the position and time of the future interactive event. (I don’t understand the specific application to quantum nonlocality.)
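
The lightcone restriction described above can be sketched directly (my illustration; units with c = 1 and one spatial dimension for simplicity): a later event can only depend on earlier events whose light has had time to reach it.

```python
def in_past_lightcone(earlier, later, c=1.0):
    """Events are (t, x) pairs; True if `earlier` can influence `later`."""
    (t1, x1), (t2, x2) = earlier, later
    return t2 > t1 and abs(x2 - x1) <= c * (t2 - t1)

assert in_past_lightcone((0, 0), (2, 1))        # reachable: |Δx| = 1 ≤ c·Δt = 2
assert not in_past_lightcone((0, 0), (1, 5))    # too far away to have interacted
```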

Properties of events, including time and place, are governed by the laws of physics. Somewhat perplexingly, Langan states: “Since the event potentials and object potentials coincide, potential instantiations of law can be said to reside ‘inside’ the objects, and can thus be regarded as functions of their internal rules or ‘object syntaxes’.” My interpretation is that objects restrict what events those objects can be part of, so they are therefore carriers of physical law. I am not really sure how this is supposed to, endomorphically, place all physical law “inside” objects; it is unclear how the earliest objects function to lawfully restrict future ones. Perhaps the first object in the universe contains our universe’s original base physical laws, and all future objects inherit at least some of these, such that these laws continue to be applied to all events in the universe?

Langan contrasts the conspansive picture presented with the more conventional spacetime/​state view: “Thus, conspansive duality relates two complementary views of the universe, one based on the external (relative) states of a set of objects, and one based on the internal structures and dynamics of objects considered as language processors. The former, which depicts the universe as it is usually understood in physics and cosmology, is called ERSU, short for Expanding Rubber Sheet Universe, while the latter is called USRE (ERSU spelled backwards), short for Universe as a Self-Representational Entity.”

Langan claims conspansive duality “is the only escape from an infinite ectomorphic ‘tower of turtles’”: without endomorphism, all objects must be mapped to a space, which can’t itself be “placed” anywhere without risking infinite regress. (Though, as I said, it seems there would have to be some sort of original object to carry laws governing future objects, and it’s unclear where this would come from.)

He also says that “At the same time, conspansion gives the quantum wave function of objects a new home: inside the conspanding objects themselves.” I am not really sure how to interpret this; the wave function correlates different objects/​particles, so it’s unclear how to place the wave function in particular objects.

The Extended Superposition Principle

“In quantum mechanics, the principle of superposition of dynamical states asserts that the possible dynamical states of a quantized system, like waves in general, can be linearly superposed, and that each dynamical state can thus be represented by a vector belonging to an abstract vector space”: in general, wave functions can be “added” to each other, with the probabilities (square amplitudes) re-normalizing to sum to 1.
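
A minimal sketch of linear superposition (my example, real amplitudes only for simplicity): two normalized state vectors are added componentwise, then renormalized so the squared amplitudes again sum to 1.

```python
import math

def superpose(psi, phi):
    """Add two state vectors and renormalize to unit norm."""
    combined = [a + b for a, b in zip(psi, phi)]
    norm = math.sqrt(sum(abs(a) ** 2 for a in combined))
    return [a / norm for a in combined]

up, down = [1.0, 0.0], [0.0, 1.0]
state = superpose(up, down)                      # equal superposition
# Squared amplitudes are probabilities and sum to 1:
assert all(abs(abs(a) ** 2 - 0.5) < 1e-12 for a in state)
```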

Langan seeks to explain wave function collapse without resort to fundamental randomness (as in the Copenhagen interpretation). Under many worlds, the randomness of the Born rule is fundamentally anthropic, as the uncertainty over one’s future observations is explained by uncertainty over “where” one is in the wave function.

Physical Markovianism is a kind of physical locality property where events only interact with adjacent events. Conspansion (“extended superposition”) allows events to interact non-locally, as long as the future events are in the light cones of the past events. Langan claims that “the Extended Superposition Principle enables coherent cross-temporal telic feedback”.

Telons are “utile state-syntax relationships… telic attractors capable of guiding cosmic and biological evolution”, somewhat similar to decision-theoretic agents maximizing their own measure. The non-locality of conspansion makes room for teleology: “In extending the superposition concept to include nontrivial higher-order relationships, the Extended Superposition Principle opens the door to meaning and design.” Since teleology claims that a whole system is “designed” according to some objective, there must be nonlocal dependencies; similarly, in a Bayesian network, conditioning on the value of a late variable can increase dependencies among earlier variables.
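
The Bayesian-network point can be verified on a toy model (my sketch, not from the source): X and Y are independent fair bits, but conditioning on the late variable Z = X xor Y makes them perfectly dependent, the standard “explaining away” effect.

```python
from itertools import product

# All four equally likely worlds (x, y, z) with z = x xor y.
worlds = [(x, y, x ^ y) for x, y in product((0, 1), repeat=2)]

def p(pred, given=lambda w: True):
    """Probability of pred under the uniform distribution, conditioned on given."""
    cond = [w for w in worlds if given(w)]
    return sum(pred(w) for w in cond) / len(cond)

# Unconditionally, X and Y are independent:
assert p(lambda w: w[0] == 1 and w[1] == 1) == p(lambda w: w[0] == 1) * p(lambda w: w[1] == 1)
# Conditioned on the late variable Z = 0, knowing X pins down Y entirely:
assert p(lambda w: w[1] == 1, given=lambda w: w[2] == 0 and w[0] == 1) == 1.0
```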

Supertautology

Truth can be conceptualized as inclusion in a domain: something is real if it is part of the domain of reality. A problem for science is that truth can’t always be determined empirically, e.g. some objects are too far away to observe.

Langan claims that “Truth is ultimately a mathematical concept...truth is synonymous with logical tautology”. It’s unclear how to integrate empirical observations and memory into such a view.

Langan seeks to start with logic and find “rules or principles under which truth is heritable”, yielding a “supertautological theory”. He claims that the following can be mathematically deduced: “nomological covariance, the invariance of the rate of global self-processing (c-invariance), and the internally-apparent accelerating expansion of the system.”

Reduction and Extension

In reduction, “The conceptual components of a theory are reduced to more fundamental components”; in extension, the theory is “extended by the emergence of new and more general relationships among [fundamental components].” These are dual to each other.

“The CTMU reduces reality to self-transducing information and ultimately to telesis, using the closed, reflexive syntactic structure of the former as a template for reality theory.” Scientific explanations need to explain phenomena; it is possible to ask “five whys”, so that scientific explanations can themselves be explained. It is unclear how this chain could bottom out except with a self-explanatory theory.

While biologists try to reduce life to physics, physics isn’t self-explanatory. Langan claims that “to explain organic phenomena using natural selection, one needs an explanation for natural selection, including the ‘natural selection’ of the laws of physics and the universe as a whole.”

“Syndiffeonic regression” is “The process of reducing distinctions to the homogeneous syntactic media that support them”. This consists of looking at different rules and finding a medium in which they are expressed (e.g. mathematical language). The process involves “unisection”, which is “a general form of reduction which implies that all properties realized within a medium are properties of the medium itself”.

The Principle of Infocognitive Monism

Although information is often conceptualized as raw bits, information is self-processing because it comes with structure; a natural language sentence has grammar, as do computer programs, which generally feature an automated parser and checker.

Engineering fields dealing with information assume that “the existence of senders, receivers, messages, channels and transmissive media is already conveniently given”, e.g. computer science assumes the existence of a Turing complete computer. This leaves unclear how these information-processing elements are embedded (e.g. in matter).

“SCSPL” stands for “Self-Configuring Self-Processing Language”, which has some things in common with a self-modifying interpreter.

Telic Reducibility and Telic Recursion

“Telic recursion is a fundamental process that tends to maximize a cosmic self-selection parameter, generalized utility, over a set of possible syntax-state relationships in light of the selfconfigurative freedom of the universe”: it is a teleological selection mechanism on infocognition, under which structures that achieve higher “generalized utility” are more likely to exist. This is perhaps a kind of self-ratification condition, where structures that can explain their own origins are more likely to exist.

It is unclear how to explain physical laws, which themselves explain other physical phenomena. Objects and laws are defined in terms of each other, e.g. mass is a property of objects and is measured due to the laws relating mass to measurable quantities. Due to this, Langan argues that “the active medium of cross-definition possesses logical primacy over laws and arguments alike and is thus pre-informational and pre-nomological in nature...i.e., telic. Telesis… is the primordial active medium from which laws and their arguments and parameters emerge by mutual refinement or telic recursion”.

It is unclear how to imagine a “pre-informational” entity. One comparison point is language: we find ourselves speaking English, and referring to other languages within English, but this didn’t have to be the case, the language could have been different. Perhaps “pre-informational” refers to a kind of generality beyond the generality allowing selection of different natural languages?

Telesis even comes before spacetime; there are mental states in which spacetime is poorly defined, and mathematicians and physicists have refined their notion of spacetime over, well, time. (Langan therefore disagrees with Kant, who considers spacetime a priori).

Langan contrasts two stages of telic recursion: “Telic recursion occurs in two stages, primary and secondary (global and local). In the primary stage, universal (distributed) laws are formed in juxtaposition with the initial distribution of matter and energy, while the secondary stage consists of material and geometric state-transitions expressed in terms of the primary stage.”

It makes sense for physical laws to be determined along with initial state: among other things, states are constrained by laws, and state configurations are more or less likely depending on the laws.

It sounds like the secondary stage consists of, roughly, dynamical system or MDP-like state transitions. However, Langan goes on to say that “secondary transitions are derived from the initial state by rules of syntax, including the laws of physics, plus telic recursion”. These views are explicitly contrasted: “The CTMU, on the other hand [in contrast to deterministic computational and continuum models of reality], is conspansive and telic-recursive; because new state-potentials are constantly being created by evacuation and mutual absorption of coherent objects (syntactic operators) through conspansion, metrical and nomological uncertainty prevail wherever standard recursion is impaired by object sparsity.”

Telic recursion provides “reality with a ‘self-simulative scratchpad’ on which to compare the aggregate utility of multiple self-configurations for self-optimizative purposes”; one can imagine different agent-like telors “planning out” the universe between them with a shared workspace. Since telic recursion includes the subject matter of anthropics, CTMU implies that anthropics applies after the universe’s creation, not just before. Langan claims that telors are “coordinating events in such a way as to bring about its own emergence (subject to various more or less subtle restrictions involving available freedom, noise and competitive interference from other telons)”; the notion of “competitive interference” is perhaps similar to Darwinian competition, in which organisms are more likely to exist if they can bring similar organisms about in competition with each other.

The Telic Principle

The Telic principle states: “the universe configures itself according to the requirement that it self-select from a background of undifferentiated ontological potential or telesis...The Telic Principle is responsible for converting potential to actuality in such a way as to maximize a universal self-selection parameter, generalized utility.”

In science, teleology has fallen out of favor, being replaced with the anthropic principle. Anthropics is a case of teleological selection, in which the present determines the past, at least subjectively (the requirement that life exist in the universe determines the universe’s initial conditions).

The Weak Anthropic Principle, which states that we must find ourselves in a universe that contains observers, fails to explain why there is a multiverse from which our universe is “selected” for the presence of observers. The multiverse view can be contrasted with a fine-tuning view, on which only a single universe is possible, one “designed” so as to be likely to contain intelligent life.

The Strong Anthropic Principle, on the other hand, states that only universes with intelligent life “actually exist”. This makes reality non-objective in some sense, and implies that the present can determine the past. Standard anthropics, however, lacks a loop model of self-causality by which such mutual determination between present and past would be possible.

We find ourselves in a self-consistent structure (e.g. our mathematical and physical notation), but it could have been otherwise: we could use a different language or mathematical notation, or find ourselves in a universe with different laws. It would therefore be circular reasoning to claim that our particular consistent structures are universal.

Langan claims that “Unfortunately, not even valid tautologies are embraced by the prevailing school of scientific philosophy, falsificationism”, as tautologies are unfalsifiable. I think Popper would say that tautologies and deductive reasoning are necessary for falsificationist science (in fact, the main motivation for falsificationism is to remove the need for Humean induction).

For anthropic arguments to work, there must be some universal principles: “If the universe is really circular enough to support some form of ‘anthropic’ argument, its circularity must be defined and built into its structure in a logical and therefore universal and necessary way. The Telic principle simply asserts that this is the case; the most fundamental imperative of reality is such as to force on it a supertautological, conspansive structure.” I think this is basically correct in that anthropic reasoning requires forms of reasoning that go beyond the reasoning one would use to reason about a universe “from outside”; the laws of the universe must be discoverable from inside and be consistent with such discovery.

The anthropic selection of the universe happens from “UBT”: “Thus, the universe ‘selects itself’ from unbound telesis or UBT, a realm of zero information and unlimited ontological potential, by means of telic recursion, whereby infocognitive syntax and its informational content are cross-refined through telic (syntax-state) feedback over the entire range of potential syntax-state relationships, up to and including all of spacetime and reality in general.” Trivially, the prior from which the universe is anthropically selected must not itself contain the information specifying the universe, since the selection process is what produces that information. UBT could be compared to the Spinozan god, a substance of which all specific entities (including the empirical physical universe) are modes, or to the Tegmark IV multiverse. Telic recursion, then, must select the immanent empirical experience of this universe out of the general UBT possibility space.

The Telic Principle implies some forms of “retrocausality”: “In particular, the Extended Superposition Principle, a property of conspansive spacetime that coherently relates widely separated events, lets the universe ‘retrodict’ itself through meaningful cross-temporal feedback.” An empirical observation of a given universe may be more likely not just because of its present and past, but because of its future, e.g. observations that are more likely to be remembered may be considered more likely (and will be more likely in the empirical remembered past).
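The memory-based reading of retrodiction can be made concrete with a toy calculation of my own (not Langan's formalism): weight each candidate history not only by its prior probability but also by the probability that its observations are later remembered. Histories whose futures preserve a record of them then dominate the remembered past.

```python
# Toy illustration (my construction, not from the CTMU paper): condition
# candidate histories on the future event "the observation is remembered".
# Each history has a prior probability and a probability of being remembered.

histories = {
    # name: (prior probability, probability the observation is remembered)
    "recorded":   (0.5, 0.9),
    "unrecorded": (0.5, 0.1),
}

def remembered_posterior(histories):
    """Reweight each history by its chance of leaving a memory, then normalize."""
    weights = {name: p * p_mem for name, (p, p_mem) in histories.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Despite equal priors, the "recorded" history dominates once we condition
# on remembering: a crude sense in which the future raises the probability
# of its own past.
print(remembered_posterior(histories))
```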

Maximization of generalized utility is a kind of economic principle: the self tries to exist more. Evolution implies something like this behavior in the limit, though not before the limit. This framework can’t represent all possible preferences, although in practice many preferences are explained by it anyway.


This as far as I got in the summary and review. The remaining sections describe general computer science background such as the Chomsky hierarchy, and SCSPL, a model of reality as a self-processing language.

Overall, I think Langan raises important questions that conventional analytic philosophy has trouble with, such as what more general principle underlies anthropics, and how to integrate cognition into a physics-informed worldview without eliminating it. He presents a number of concepts, such as syndiffeonesis, that are useful in themselves.

The theory is incredibly ambitious, but from my perspective, it didn’t deliver on that ambition. This is partially because the document was hard to understand, but I’m not convinced I’d think CTMU delivers on its ambition if I fully understood it. It’s an alternative ontology, conceiving of reality as a self-processing language, which avoids some problems of more mainstream theories, but has problems of its own, and seems quite underspecified in the document despite the use of formal notation. In particular, I doubt that conspansion solves quantum locality problems as Langan suggests; conceiving of the wave function as embedded in conspanding objects seems to neglect correlations between the objects implied by the wave function, and the appeal to teleology to explain the correlations seems hand-wavey.

A naive interpretation of CTMU would suggest time travel is possible through telesis, though I doubt Langan would endorse strong implications of this. I’ve written before on anthropics and time travel; universes that don’t basically factor into a causal graph tend to strongly diverge from causality, e.g. in optimizing for making lots of time turners pass the same message back without ever passing a different message back. Anthropics shows that, subjectively, there is at least some divergence from a causal universe model, but it’s important to explain why this divergence is fairly bounded, and we don’t see evidence of strong time travel, e.g. hypercomputation.
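The time-turner scenario can be sketched as a self-consistency constraint, in the spirit of Novikov-style consistency; the following is my own toy illustration, not anything from CTMU or my earlier writing. A message sent back in time must satisfy response(m) == m: the message received is exactly the message later sent, so consistent histories are the fixed points of the response function.

```python
# Toy sketch of self-consistent "time turner" messages (my illustration):
# a history is consistent only if the message we would send back, given
# what we received, equals what we received. Consistent histories are the
# fixed points of the response function.

MESSAGES = range(10)

def response(received: int) -> int:
    """What we would send back, given the message received (arbitrary toy rule)."""
    return (received * received) % 10

# Only fixed points survive as consistent histories; all other candidate
# messages are "anthropically" excluded.
consistent = [m for m in MESSAGES if response(m) == m]
print(consistent)
```

The worry in the text is that a universe selecting only over such fixed points behaves very unlike a causal graph, which is why the observed boundedness of anthropic effects needs explaining.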

Despite my criticisms of the document, it raises a number of important questions for mainstream scientific philosophy, and further study of these questions and their solutions, with more explication of how the theory “adds up to normality” (e.g. in the case of causality), might be fruitful. Overall, I found it worthwhile to read the document even though I didn’t understand or agree with all of it.