Re: point 7, I found Jessica Taylor’s take on counterfactuals in terms of linear logic pretty compelling.

# zhukeepa (Alex Zhu)

Good question! Yeah, there’s nothing fundamentally quantum about this effect. But if the simulator wants to focus on universes with 1 & 2 fixed (e.g. if they’re trying to calculate the distribution of superintelligences across Tegmark IV), the PRNG (along with the initial conditions of the universe) seems like a good place for a simulator to tweak things.

It is not clear to me that this would result in a lower Kolmogorov complexity at all. Such an algorithm could of course use a pseudo-random number generator for the vast majority of quantum events which do not affect p(ASI) (like the creation of CMB photons), but this is orthogonal to someone nudging the relevant quantum events towards ASI. For these relevant events, I am not sure that the description “just do whatever favors ASI” is actually shorter than just the sequence of events.

Hmm, I notice I may have been a bit unclear in my original post. When I’d said “pseudorandom”, I wasn’t referring to the use of a pseudo-random number generator instead of a true RNG. I was referring to the “transcript” of relevant quantum events only *appearing* random, without being “truly random”, because of the way in which they were generated (which I’m thinking of as being better described as “sampled from a space parameterizing the possible ways the world could be, conditional on humanity building superintelligence” rather than “close to truly random, or generated by a pseudo-random RNG, except with nudges toward ASI”).

I mean, if we are simulated by a Turing machine (which is equivalent to quantum events having a low Kolmogorov complexity), then a TM which just implements the true laws of physics (and cheats with a PRNG, not like the inhabitants would ever notice) is surely simpler than one which tries to optimize towards some distant outcome state.

As an analogy, think about the Kolmogorov complexity of a transcript of a very long game of chess. If both opponents are following a simple algorithm of “determine the allowed moves, then use a PRNG to pick one of them”, that should have a bounded complexity. If both are chess AIs which want to win the game (i.e. optimize towards a certain state) and use a deterministic PRNG (lest we are incompressible), the size of your Turing machine—which /is/ the Kolmogorov complexity—just explodes.

Wouldn’t this also serve as an argument against malign consequentialists in the Solomonoff prior, that may make it *a priori* more likely for us to end up in a world with particular outcomes optimized in their favor?

It is not clear to me that this would result in a lower Kolmogorov complexity at all.

[...]

Look at me rambling about universe-simulating TMs. Enough, enough.

To be clear, it’s also not clear to me that this would result in a lower K-complexity either. My main point is that (1) the null hypothesis of quantum events being independent of consciousness rests on assumptions (like assumptions about what the Solomonoff prior is like) that I think are actually pretty speculative, and that (2) there are speculative ways the Solomonoff prior could be in which our consciousness can influence quantum outcomes.

My goal here is not to make a positive case for consciousness affecting quantum outcomes, as much as it is to question the assumptions behind the case against the world working that way.

This. Physics runs on falsifiable predictions. If ‘consciousness can affect quantum outcomes’ is any more true than the classic ‘there is an invisible dragon in my garage’, then discovering that fact would seem easy from an experimentalist standpoint. Sources of quantum randomness (e.g. a weak source plus detector) are readily available, so any claimant who thinks they can predict or affect their outcomes could probably be tested initially for a few hundred dollars.
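As a rough illustration of how cheap such a test would be, here’s a back-of-the-envelope power calculation (a sketch; the 0.5 factor is the worst-case Bernoulli standard deviation, and the claimed bias `eps` is a hypothetical parameter):

```python
from statistics import NormalDist

def flips_needed(eps, alpha=0.05, power=0.95):
    """Quantum coin flips needed for a one-sided test to detect a claimant
    who shifts P(heads) from 0.5 to 0.5 + eps (normal approximation,
    worst-case standard deviation 0.5)."""
    z = NormalDist().inv_cdf
    return ((z(1 - alpha) + z(power)) * 0.5 / eps) ** 2

# Even a claimed 1% bias needs only ~27,000 flips.
print(round(flips_needed(0.01)))
```

At typical quantum-RNG bit rates this is seconds of data, which is why the absence of any such positive result is itself informative.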

Yes, I’m also bearish on consciousness affecting quantum outcomes in ways that are as overt and measurable as what you’re gesturing at. The only thing I was arguing in this post is that the effect size of consciousness on quantum outcomes is *maybe more than zero*, as opposed to *obviously exactly zero*. I don’t think of myself as having made any arguments that the effect size should be non-negligible, although I also don’t think it’s been ruled out that a non-negligible effect size lies somewhere between “completely indistinguishable from no influence at all” and “overt and measurable to the extent a proclaimed psychic could reproducibly affect quantum RNG outcomes”.

I’ll take a stab at this. Suppose we had strong *a priori* reasons for thinking it’s in our logical past that we’ll have created a superintelligence of *some* sort. Let’s suppose that some particular quantum outcome in the future can get chaotically amplified, so that in one Everett branch humanity never builds any superintelligence because of some sort of global catastrophe (say with 99% probability, according to the Born rule), and in some other Everett branch humanity builds some kind of superintelligence (say with 1% probability, according to the Born rule). Then we should expect to end up in the Everett branch in which humanity builds some kind of superintelligence with ~100% probability, despite the Born rule saying we only have a 1% chance of ending up there, because the “99%-likely” Everett branch was ruled out by our *a priori* reasoning.

I’m not sure if this is the kind of concrete outcome that you’re asking for. I imagine that, for the most part, the kind of universe I’m describing will still yield frequencies that converge on the Born probabilities, and for the most part appear indistinguishable from a universe in which quantum outcomes are “truly random”. See my reply to Joel Burget for some more detail about how I think about this hypothesis.
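The arithmetic of the thought experiment is just conditioning: drop the branches inconsistent with the assumed logical past, then renormalize the Born weights over what remains (the branch names and numbers below are the hypothetical ones from the example above):

```python
# Born-rule weights of the two branches in the thought experiment.
born = {"global catastrophe": 0.99, "builds superintelligence": 0.01}

# Condition on "humanity builds a superintelligence" being in our
# logical past: keep only consistent branches, then renormalize.
consistent = {k: v for k, v in born.items() if k == "builds superintelligence"}
total = sum(consistent.values())
posterior = {k: v / total for k, v in consistent.items()}

print(posterior)  # the 1%-weight branch gets posterior probability 1.0
```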

If we performed a trillion 50⁄50 quantum coin flips, and found a program with K-complexity far less than a trillion that could explain these outcomes, that would be an example of evidence in favor of this hypothesis. (I don’t think it’s very likely that we’ll be able to find a positive result if we run that particular experiment; I’m naming it more to illustrate the kind of thing that would serve as evidence.) (EDIT: This would only serve as evidence against quantum outcomes being truly random. In order for it to serve as evidence in favor of quantum outcomes being impacted by consciousness, the low K-complexity program explaining these outcomes would need to route through the decisions of conscious beings somehow; it wouldn’t work if the program were just printing out digits of pi in binary, for example.)

My inside view doesn’t currently lead me to put much credence on this picture of reality actually being true. My inside view is more like “huh, I notice I have become way more uncertain about the *a priori* arguments about what kind of universe we live in—especially the arguments that we live in a universe in which quantum outcomes are supposed to be ‘truly random’—so I will expand my hypothesis space for what kinds of universes we might be living in”.
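One crude way to probe for this kind of structure: a compressor gives an upper bound on K-complexity, though the bound can be arbitrarily loose. The sketch below (illustrative only) shows zlib finding the structure in a trivially patterned transcript but failing on seeded-PRNG output, even though the PRNG transcript’s true K-complexity is bounded by a few lines of code plus a seed:

```python
import random
import zlib

n = 100_000
random.seed(42)
# A transcript generated by a short deterministic program: its true
# K-complexity is bounded by (generator code + seed), independent of n.
prng_bytes = bytes(random.randrange(256) for _ in range(n))
# A trivially structured transcript for comparison.
patterned = b"ab" * (n // 2)

print(len(zlib.compress(patterned)))   # tiny: zlib finds the pattern
print(len(zlib.compress(prng_bytes)))  # ~n: zlib can't see the short program
```

This is also why a negative compression result on real coin-flip data wouldn’t settle the question: generic compressors witness only some kinds of low-complexity structure, not all of them.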

Shortly after publishing this, I discovered something written by John Wheeler (whom Chris Langan cites) that feels thematically relevant. From Law Without Law:

I was hoping people other than Jessica would share some specific curated insights they got. Syndiffeonesis is in fact a good insight.

I finally wrote one up! It ballooned into a whole LessWrong post.

# CTMU insight: maybe consciousness *can* affect quantum outcomes?

It seems that if I only read the main text, the obvious interpretation is that points are events and the circles restrict which other events they can interact with.

This seems right to me, as far as I can tell, with the caveat that “restrict” (/ “filter”) and “construct” are two sides of the same coin, as per constructive-filtrative duality.

From the diagram text, it seems he is instead saying that each circle represents entangled wavefunctions of some subset of objects that generated the circle.

I think each circle represents the entangled wavefunctions of *all* of the objects that generated the circle, not just some subset.

Relatedly, you talk about “the” wave function in a way that connotes a single universal wave function, like in many-worlds. I’m not sure if this is what you’re intending, but it seems plausible that the way you’re imagining things is different from how my model of Chris is imagining things, which is as follows: if there are N systems that are all separable from one another, we could write a universal wave function for these N systems that we could factorize as ψ_1 ⊗ ψ_2 ⊗ … ⊗ ψ_N, and there would be N inner expansion domains (/ “circles”), one for each ψ_i, and we can think of each ψ_i as being “located within” each of the circles.
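A minimal sketch of the separability picture, for the two-qubit case (the determinant test is standard: a two-qubit state reshaped to a 2×2 matrix is a product state ψ_1 ⊗ ψ_2 iff that matrix has rank 1, i.e. zero determinant):

```python
from math import sqrt, isclose

def kron2(a, b):
    """Tensor product of two single-qubit states -> 4-vector psi_1 (x) psi_2."""
    return [a[i] * b[j] for i in range(2) for j in range(2)]

def entangled(psi):
    """Reshape the 2-qubit state to a 2x2 matrix; a nonzero determinant
    means Schmidt rank 2, i.e. the subsystems share one entangled 'circle'
    rather than occupying two separate ones."""
    det = psi[0] * psi[3] - psi[1] * psi[2]
    return not isclose(det, 0.0, abs_tol=1e-12)

product = kron2([1, 0], [1 / sqrt(2), 1 / sqrt(2)])  # separable: two circles
bell = [1 / sqrt(2), 0, 0, 1 / sqrt(2)]              # entangled: one circle
print(entangled(product), entangled(bell))  # False True
```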

Great. Yes, I think that’s the thing to do. Start small! I (and presumably others) would update a lot from a new piece of *actual formal mathematics* from Chris’s work, even if that work was, by itself, not very impressive. (I would also want to check that that math had something to do with his earlier writings.)

I think we’re on exactly the same page here.

Please be prepared for the possibility that Chris is very smart and creative, and that he’s had some interesting ideas (e.g. Syndiffeonesis), but that his framework is more of an interlocked collection of ideas than anything mathematical (despite using terms from mathematics). Litany of Tarski and all that.

That’s certainly been a live hypothesis in my mind as well, that I don’t think can be ruled out before I personally see (or produce) a piece of formal math (that most mathematicians would consider formal, lol) that captures the core ideas of the CTMU.

So Chris either (i) doesn’t realize that you need to be precise to communicate with mathematicians, or (ii) doesn’t understand how to be precise.

While I agree that there isn’t very much explicit and precise mathematical formalism in the CTMU papers themselves, my best guess is that (iii) Chris *does* unambiguously gesture at a precise structure he has in mind, *assuming* a sufficiently thorough understanding of the background assumptions in his document (which I think is a false assumption for most mathematicians reading this document). By analogy, it seems plausible to me that Hegel was gesturing at something quite precise in some of his philosophical works, that only got mathematized nearly 200 years later by category theorists. (I don’t understand any Hegel myself, so take this with a grain of salt.)

Except, I can already predict you’re going to say that no piece of his framework can be understood without the whole. Not even by making a different smaller framework that exists just to showcase the well-ordering alternative. It’s a little suspicious.

False! :P I think no part of his framework can be *completely* understood without the whole, but I think the big pictures of some core ideas can be understood in relative isolation. (Like syndiffeonesis, for example.) I think this is plausibly true for his alternatives to well-ordering as well.

If you’re going to fund someone to do something, it should be to formalize Chris’s work. That would not only serve as a BS check, it would make it vastly more approachable.

I’m very on board with formalizing Chris’s work, both to serve as a BS check and to make it more approachable. I think formalizing it in full will be a pretty nontrivial undertaking, but formalizing isolated components feels tractable, and is in fact where I’m currently directing a lot of my time and funding.

“gesture at something formal”—not in the way of the “grammar” it isn’t. I’ve seen rough mathematics and proof sketches, especially around formal grammars. This isn’t that, and it isn’t trying to be.

[...]

Nonsense! If Chris has an alternative to well-ordering, that’s of general mathematical interest! He would make a splash simply writing that up formally on its own, without dragging the rest of his framework along with it.

My claim was specifically around whether it would be worth people’s time to attempt to decipher Chris’s written work, not whether there’s value in Chris’s work that’s of general mathematical interest. If I succeed at producing formal artifacts inspired by Chris’s work, written in a language that is far more approachable for general academic audiences, I *would* recommend that people check those out.

That said, I am very sympathetic to the question “If Chris has such good ideas that he claims he’s formalized, why hasn’t he written them down formally—or at least gestured at them formally—in a way that most modern mathematicians or scientists can recognize? Wouldn’t that clearly be in his self-interest? Isn’t it pretty suspicious that he hasn’t done that?”

My current understanding is that he believes his current written work *should* be sufficient for modern mathematicians and scientists to understand his core ideas, and that insofar as they reject his ideas, it’s because of some combination of them not being intelligent and open-minded enough, which he can’t do much about. I think his model is… not exactly false, but it is also definitely not how I would choose to characterize most smart people who are skeptical of Chris.

To understand why Chris thinks this way, it’s important to remember that he was never acculturated into the norms of the modern intellectual elite—he grew up in the midwest, without much affluence; he had a physically abusive stepfather whom he kicked out of his home by lifting weights; he was expelled from college for bureaucratic reasons, which pretty much ended his relationship with academia (IIRC); he mostly worked blue-collar jobs throughout his adult life; AND he may *actually* have been smarter than almost anybody he’d ever met or heard of. (Try picturing what von Neumann might have been like if he’d had the opposite of a prestigious and affluent background, and had gotten spurned by most of the intellectuals he’d talked to.) Among other things, Chris hasn’t had very many intellectual peers who could gently inform him that many portions of his written work that he considers totally obvious and straightforward are actually not at all obvious to a majority of his intended audience.

On the flip side, I think this means there’s a lot of low-hanging fruit in translating Chris’s work into something more digestible by the modern intellectual elite.

I was hoping people other than Jessica would share some specific curated insights they got. Syndiffeonesis is in fact a good insight.

Gotcha! I’m happy to do that in a followup comment.

I’d categorize this section as “not even wrong”; it isn’t doing anything formal enough to have a mistake in it.

I think it’s an attempt to gesture at something formal within the framework of the CTMU that I think you can only really understand if you grok enough of Chris’s preliminary setup. (See also the first part of my comment here.)

(Perhaps you’d run into issues with making the sets well-ordered, but if so he’s running headlong into the same issues.)

A big part of Chris’s preliminary setup is around how to sidestep the issues around making the sets well-ordered. What I’ve picked up in my conversations with Chris is that part of his solution involves mutually recursively defining objects, relations, and processes, in such a way that they all end up being “bottomless fractals” that cannot be fully understood from the perspective of any existing formal frameworks, like set theory. (Insofar as it’s valid for me to make analogies between the CTMU and ZFC, I would say that these “bottomless fractals” violate the axiom of foundation, because they have downward infinite membership chains.)
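For a toy illustration of a downward infinite membership chain (this is only an analogy using Python reference cycles, not Chris’s construction):

```python
# A toy non-well-founded "object": a container that is its own sole member,
# so the membership chain x ∋ x ∋ x ∋ ... never bottoms out. ZFC's axiom of
# foundation forbids such sets; Python's reference semantics allows the cycle.
x = []
x.append(x)

def chain_depth(obj, limit=10):
    """Follow first-member links until the chain bottoms out or hits `limit`."""
    depth = 0
    while isinstance(obj, list) and obj and depth < limit:
        obj = obj[0]
        depth += 1
    return depth

print(chain_depth([[[]]]))  # 2: a well-founded chain bottoms out
print(chain_depth(x))       # 10: cut off only by the limit, never by a bottom
```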

I’m really not seeing any value in this guy’s writing. Could someone who got something out of it share a couple specific insights that got from it?

I think Chris’s work is most valuable to engage with for people who have independently explored philosophical directions similar to the ones Chris has explored; I don’t recommend for most people to attempt to decipher Chris’s work.

I’m confused why you’re asking about specific insights people have gotten when Jessica has included a number of insights she’s gotten in her post (e.g. “He presents a number of concepts, such as syndiffeonesis, that are useful in themselves.”).

Thanks a lot for posting this, Jessica! A few comments:

It’s an alternative ontology, conceiving of reality as a self-processing language, which avoids some problems of more mainstream theories, but has problems of its own, and seems quite underspecified in the document despite the use of formal notation.

I think this is a reasonable take. My own current best guess is that the contents of the document uniquely specify a precise theory, but that it’s very hard to understand what’s being specified without grokking the details of all the arguments he’s using to pin down the CTMU. I partly believe this because of my conversations with Chris, and I partly believe this because someone else I’d funded to review Chris’s work (who had extensive prior familiarity with the kinds of ideas and arguments Chris employs) managed to make sense of most of the CTMU (including the portions using formal notation) based on Chris’s written work alone, in a way that Chris has vetted over the course of numerous three-way Zoom calls.

In particular, I doubt that conspansion solves quantum locality problems as Langan suggests; conceiving of the wave function as embedded in conspanding objects seems to neglect correlations between the objects implied by the wave function, and the appeal to teleology to explain the correlations seems hand-wavey.

I’m actually not sure which quantum locality problems Chris is referring to, but I don’t think the thing Chris means by “embedding the wave function in conspanding objects” runs into the problems you’re describing. Insofar as one object is correlated with others via quantum entanglement, I think those other objects would occupy the same circle—from the subtext of Diagram 11 on page 28:

*The result is a Venn diagram in which circles represent objects and events, or (n>1)-ary interactive relationships of objects. That is, each circle depicts the “entangled quantum wavefunctions” of the objects which interacted with each other to generate it.*

In particular, I think this manifests in part as an extreme lack of humility.

I just want to note that, based on my personal interactions with Chris, I experience Chris’s “extreme lack of humility” similarly to how I experience Eliezer’s “extreme lack of humility”:

- in both cases, I think they have plausibly calibrated beliefs about having identified certain philosophical questions that are of crucial importance to the future of humanity, that most of the world is not taking seriously,[1] leading them to feel a particular flavor of frustration that people often interpret as an extreme lack of humility
- in both cases, they are in some senses incredibly humble in their pursuit of truth, doing their utmost to be extremely honest with themselves about where they’re confused

- ^ It feels worth noting that Chris Langan wrote about Newcomb’s paradox in 1989, and that his resolution involves thinking in terms of being in a simulation, similarly to what Andrew Critch has written about.

I’ve spent 40+ hours talking with Chris directly, and for me, a huge part of the value also comes from seeing how Chris synthesizes all these ideas into what appears to be a coherent framework.

Here’s my current understanding of what Scott meant by “just a little off”.

I think exact Bayesian inference via Solomonoff induction doesn’t run into the trapped prior problem. Unfortunately, bounded agents like us can’t do exact Bayesian inference via Solomonoff induction, since we can only consider a finite set of hypotheses at any given point. I think we try to compensate for this by recognizing that this list of hypotheses is incomplete, and appending it with new hypotheses *whenever it seems like our current hypotheses are doing a sufficiently terrible job of explaining the input data*.

One side effect is that if the true hypothesis (e.g. “polar bears are real”) is not among our currently considered hypotheses, but our currently considered hypotheses are doing a *sufficiently non-terrible job of explaining the input data* (e.g. if the hypothesis “polar bears aren’t real, but there’s a lot of bad evidence suggesting that they are” is included, and the data is noisy enough that this hypothesis is reasonable), we just never even end up considering the true hypothesis. There wouldn’t be accumulating likelihood ratios in favor of polar bears, because actual polar bears were never considered in the first place.

I think something similar is happening with phobias. For example, for someone with a phobia of dogs, I think the (subconscious, non-declarative) hypothesis “dogs are safe” *doesn’t actually get considered* until the subject is well into exposure therapy, after which they’ve accumulated enough evidence that’s sufficiently inconsistent with their prior hypotheses of dogs being scary and dangerous that they start considering alternative hypotheses.

In some sense this algorithm is “going out of its way to do something like compartmentalization”, in that it’s actively trying to fit all input data into its current hypotheses (/ “compartments”) until this method no longer works.
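The failure mode described above can be sketched in a few lines (a toy model, not anyone’s actual algorithm; the per-observation floor and the candidate hypothesis 0.9 are arbitrary choices for illustration):

```python
import math

def log_likelihood(p_heads, data):
    """Log-likelihood of a Bernoulli 'heads probability' hypothesis."""
    return sum(math.log(p_heads if d else 1.0 - p_heads) for d in data)

def update(hypotheses, data, floor_per_obs=math.log(0.3)):
    """Only expand the hypothesis set when every current hypothesis falls
    below a per-observation log-likelihood floor."""
    best = max(log_likelihood(h, data) for h in hypotheses)
    if best < floor_per_obs * len(data):
        return hypotheses + [0.9]  # belatedly consider "mostly heads"
    return hypotheses

data = [True] * 20  # twenty heads in a row
# With a lenient floor, 0.5 is "sufficiently non-terrible", so the better
# hypothesis 0.9 is never even considered -- the trapped-prior effect.
print(update([0.5], data))                                # [0.5]
# A stricter floor forces the expansion.
print(update([0.5], data, floor_per_obs=math.log(0.6)))   # [0.5, 0.9]
```

The analogue of exposure therapy here is gathering enough data, under a strict enough adequacy standard, that the old hypotheses finally fail the floor and new ones get admitted.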

# [Question] Have the lockdowns been worth it?

Yep! I addressed this point in footnote [3].

No direct connections that I’m aware of (besides non-classical logics being generally helpful for understanding the sorts of claims the CTMU makes).