(my guess is that other systems play a significant role in our conscious attitude towards pleasure)
Curious what systems you have in mind here.
Right. But just doing a finite number of million-fold-increase bets doesn’t seem so crazy to me. I think this is confounded a bit by the fact that the resources in the universe are already mind-bogglingly large, which makes it hard to imagine doubling the future utility. As a thought experiment, consider the choice between the following futures: (a) a guarantee of 100 million years of flourishing human civilization, but no post-humans or leaving the solar system; (b) a 50% chance of extinction and a 50% chance of intergalactic colonization and transhumanism. To me, option (b) feels more intuitively appealing.
I disagree strongly with SBF’s answer to Tyler Cowen’s Earth doubling problem, but I don’t know what to do once the easy escapes are closed off. I’m curious if someone does.
Have a bounded utility function?
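To illustrate what I mean (a toy sketch with my own numbers, loosely modeled on the 51%-double / 49%-destroy bet taken repeatedly): with linear utility the expected value of taking yet another bet keeps growing forever, while with any bounded utility the vanishing survival probability eventually wins and you refuse further bets.

```python
# Toy sketch: expected utility of taking n all-or-nothing doubling bets in a row,
# under an unbounded (linear) utility vs. a bounded one.

def expected_utility(n_bets, utility, p_win=0.51, start=1.0):
    """You keep start * 2**n_bets with probability p_win**n_bets, else end with 0."""
    p_survive = p_win ** n_bets
    return p_survive * utility(start * 2 ** n_bets) + (1 - p_survive) * utility(0.0)

linear = lambda x: x             # unbounded: always worth taking one more bet
bounded = lambda x: x / (x + 1)  # bounded in [0, 1): eventually refuses

for n in (0, 1, 10, 100):
    print(n, expected_utility(n, linear), expected_utility(n, bounded))
# linear:  1.0, 1.02, ~1.22, ~7.2   (keeps growing like (2 * 0.51)**n)
# bounded: 0.5, ~0.34, ~0.0012, ~6e-30  (driven to zero by 0.51**n)
```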
Presumably the bins are given some sort of prefix-free code, so that when a behavior-difference is revealed within a bin (e.g. after more time has passed) it can be split into two bins, with some rule for which one is “default”.
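Something like the following toy scheme (my sketch of what that could look like): splitting a bin replaces its codeword w with the children w+'0' and w+'1', with the '0' child treated as the “default”, and the set of live codewords stays prefix-free throughout.

```python
# Toy illustration of bins with prefix-free codewords that get refined
# whenever a behavior-difference shows up inside a bin.

def split_bin(bins, codeword):
    """Split the bin with this codeword into a default child and a new child."""
    bins.remove(codeword)
    default, other = codeword + "0", codeword + "1"
    bins.update({default, other})
    return default, other

def is_prefix_free(bins):
    return not any(a != b and b.startswith(a) for a in bins for b in bins)

bins = {""}                 # start with a single bin covering everything
split_bin(bins, "")         # first observed difference: {"0", "1"}
split_bin(bins, "0")        # further difference inside the default bin
assert is_prefix_free(bins)
print(sorted(bins))         # ['00', '01', '1']
```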
I only just realized that you’re mainly thinking of the complexity of semimeasures on infinite sequences, not the complexity of finite strings. I guess that should have been obvious from the OP; the results I’ve been citing are about finite strings. My bad! For semimeasures, this paper proves that there actually is a non-constant gap between the log-total-probability and description complexity. Instead the gap is bounded by the Kolmogorov complexity of the length of the sequences. This is discussed in section 4.5.4 of Li&Vitanyi.
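Written out, the bound is something like the following (my rendering of the result described above; see 4.5.4 for the precise statement):

$$\big|\,Km(x) - \big({-\log_2 M(x)}\big)\,\big| \;\le\; K(\ell(x)) + O(1),$$

where $Km$ is monotone complexity, $M$ is the universal lower-semicomputable semimeasure, and $\ell(x)$ is the length of $x$.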
Fair. I was summarizing “dominates within a constant factor” as just “dominates”, which admittedly is not totally accurate ;)
I don’t understand the mechanism that lets you find some clever way to check which (gauge, wavefunction) pairs are legal
I’m not really sure how gauge-fixing works in QM, so I can’t comment here. But I don’t think it matters: there’s no need to check which pairs are “legal”, you just run all possible pairs in parallel and see what observables they output. Pairs which are in fact gauge-equivalent will produce the same observables by definition, and will accrue probability to the same strings.
Perhaps you’re worried that physically-different worlds could end up contributing identical strings of observables? Possibly, but (a) I think that if all possible strings of observables are the same, the worlds should be considered equivalent, so that could be one way of distinguishing them, and (b) this issue seems orthogonal to k-complexity vs. alt-complexity.
Like, can we specialize this new program to a program that’s just ‘dovetailing’ across all possible gauge-choices and then running physics on each?
You could specialize to just dovetailing across gauges, but this might make the program longer since you’d need to specify physics.
So it presumably has to pick one of them to “predict”, but how is it doing this?
I don’t think you need to choose a particular history to predict, since all observables are gauge-invariant. So say we’re trying to predict some particular sequence of observables in the universe. You run your search over all programs; the search includes various choices of gauge/initialization. Since the observables are gauge-invariant, each of those choices of gauge generates the same sequence of observables, so their algorithmic probability accumulates to a single string. Once the string has accumulated enough algorithmic probability, we can refer to it with a short codeword (this is the tricky part; see below).
But presumably sampling isn’t good enough, because the random seed needed to hit a particular cluster of my choosing could be arbitrarily improbable
The idea is that, using a clever algorithm, you can arrange the clusters in a contiguous way inside the unit interval $[0,1]$. Since they’re contiguous, if a particular cluster has volume $2^{-k}$ we can refer to it with a codeword of length $k$.
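Concretely, something like this toy version (the arrangement into $[0,1)$ and the dyadic addressing are my illustration, not the construction from the paper): a contiguous chunk of width $w$ always contains a whole dyadic cell of comparable width, and that cell’s binary address, of length roughly $-\log_2 w$, serves as the codeword.

```python
import math

def dyadic_codeword(a, w):
    """Binary codeword for the contiguous interval [a, a+w) inside [0, 1).

    Pick a resolution 2**-k fine enough (2**-k <= w/2) that [a, a+w) must
    contain a whole dyadic cell [j*2**-k, (j+1)*2**-k); the k-bit expansion
    of j is the codeword, of length about -log2(w) plus a bit.
    """
    k = max(1, math.ceil(math.log2(2.0 / w)))
    j = math.ceil(a * 2 ** k)            # first grid cell starting at or after a
    assert (j + 1) * 2 ** -k <= a + w    # the whole cell fits inside the interval
    return format(j, "b").zfill(k)

# Three contiguous "clusters" of probability mass filling [0, 1):
widths = [0.5, 0.25, 0.25]
start = 0.0
for w in widths:
    print(w, dyadic_codeword(start, w))   # 0.5 -> '00', 0.25 -> '100', 0.25 -> '110'
    start += w
```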
The OP is about which measure assigns a higher value to our universe, not just which one is more convenient.
It’s interesting that K-complexity has the property that there is always a single explanation which dominates the space of possible explanations. Intuitively, it seems like there are often cases where there are many qualitatively different yet equally valid explanations for a given phenomenon. I wonder if there are natural complexity measures that reflect this many-explanations picture as well; I think this might be the case for Levin complexity[1], and it could provide an interesting way of defining ‘emergence’ quantitatively.
Or not?? I haven’t read this paper closely yet, but it sounds like it might prove a similar “one hypothesis dominates” theorem for Levin complexity.
The programs used in the proof basically work like this: they dovetail all programs and note when they halt/output, keeping track of how much algorithmic probability has been assigned to each string. Then a particular string is selected using a clever encoding scheme applied to the accrued algorithmic probability. So the gauge theory example would just be a special case of that, applied to the accumulated probability from programs in various gauges.
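A stripped-down version of that dovetailing loop (my own toy stand-in, with hard-coded programs instead of a real universal machine): run every program with step budget 1, then 2, then 3, …, and whenever a program halts, credit $2^{-|p|}$ of algorithmic probability to the string it output.

```python
from collections import defaultdict

TOY_PROGRAMS = {           # program bitstring -> (output string, steps to halt)
    "0":    ("A", 3),
    "10":   ("A", 5),      # a second, longer program for the same output
    "11":   ("B", 2),
    "0010": ("C", 7),
}

def dovetail(max_budget):
    mass = defaultdict(float)          # accrued algorithmic probability per string
    halted = set()
    for budget in range(1, max_budget + 1):
        for prog, (output, steps) in TOY_PROGRAMS.items():
            if prog not in halted and steps <= budget:
                halted.add(prog)
                mass[output] += 2.0 ** -len(prog)
    return dict(mass)

print(dovetail(10))   # {'B': 0.25, 'A': 0.75, 'C': 0.0625}
# "A" accumulates mass from two different programs, just as gauge-equivalent
# initializations would accrue probability to the same string of observables.
```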
As John conjectured, alt-complexity is a well-known notion in algorithmic information theory, and differs from K-complexity by at most a constant. See section 4.5 of this book for a proof. So I think the stuff about how physics favors alt-complexity is a bit overstated—or at least, can only contribute a bounded amount of evidence for alt-complexity.
ETA: this result is about the complexity of finite strings, not semimeasures on potentially infinite sequences; for such semimeasures, there actually is a non-constant gap between the log-total-probability and description complexity.
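For reference, the finite-string coding theorem being invoked says, roughly (my paraphrase of the standard statement):

$$K(x) \;=\; -\log_2 m(x) + O(1), \qquad \text{where } m(x) = \sum_{p\,:\,U(p)=x} 2^{-|p|},$$

with $U$ a universal prefix machine; $-\log_2 m(x)$ is the log-total-probability quantity that plays the role of alt-complexity here.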
As recent events have illustrated, stimulant use can also have its downsides.
I think “proof” as used in the post was meant to refer to neither the colloquial use nor fully-formalized statements, but proofs as they are actually used by human mathematicians: written arguments which the community of mathematicians believe could be formalized if necessary. Then the question is how much confidence we should have in a statement given that such a “proof” exists. Mathematicians are quite good at determining which proofs are valid, but as the post points out, they are not infallible.
I’m saying the study of novel mathematical structures is analogous to such probing. At first, one can only laboriously perform step-by-step deductions from the axioms, but as one does many such deductions, intuition and understanding can be developed. This is enabled by formalization.
Yeah, I think the “varying the axioms” thing makes more sense for math in particular, not so much the other sciences. As you say, the equivalent thing in the natural sciences is more like experimentation.
Maybe we can roughly unify them? In both cases, we have some domain where we understand phenomena well. Using this understanding, we develop tools that allow us to probe a new domain which we understand less well. After repeated probing, we develop an intuition/understanding of this new domain as well, allowing us to develop tools to explore further domains.
Kinda sounds like an apocalyptic religious tract. I also don’t think it makes sense for the AI to discard all information about life on Earth; at a minimum, that information would be very important for predictions about potential aliens.
It is more of a Hegelian/Kuhnian model of phase transitions after a lot of data accumulation and processing.
But in the case of hyperbolic geometry, the accumulation of “data” came from working out the consequence of varying the axioms, right? So I don’t think this necessarily contradicts the OP. We have a set of intuitions, which we can formalize and distill into axioms. Then by varying the axioms and systematically working out their consequences, we can develop new intuitions.
I wouldn’t use the phrase “transforms trivially” here since a “trivial transformation” usually refers to the identity transformation
No, I do mean the identity transformation. Scalar fields do not transform at all under coordinate changes. To be precise, if we have a coordinate change matrix $A$ (so that coordinates transform as $x' = Ax$), a scalar field transforms like

$$\phi'(x') = \phi(x),$$

whereas a vector field transforms like

$$V'(x') = A\,V(x).$$
For more details check out these wikipedia pages.
As far as I can tell, there’s actually no mathematical difference between a vector field in 3D and a 3-scalar field that assigns a 3D scalar to each point.
The difference is in how they transform under coordinate changes. To physicists, a vector field is defined by how it transforms. So this:
You can replace the vector field V(x) with a 3-scalar field and see the same thing
is not correct; by definition, a 3-scalar field should transform trivially under coordinate changes.
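To make the difference concrete, here’s a small numerical example (mine, not from the post or the linked pages): under a coordinate rotation $x' = Ax$, a genuine vector field’s components get multiplied by $A$, while a “3-scalar field” would just carry its three numbers over unchanged, so the two rules give different components.

```python
import numpy as np

theta = np.pi / 4
A = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # rotation about the z-axis

def V(x):
    """Example vector field: rigid rotational flow about the z-axis."""
    return np.array([-x[1], x[0], 0.0])

x = np.array([1.0, 0.0, 0.0])

vector_rule = A @ V(x)   # vector-field rule: components rotate with the coordinates
scalar_rule = V(x)       # 3-scalar rule: the three values are carried over as-is

print(vector_rule)       # approx [-0.71, 0.71, 0.]
print(scalar_rule)       # [-0.,  1.,  0.]
```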
I think your proposal is the same as regular UDASSA. The “claw” and “world” aren’t intrinsic parts of the framework, they’re just names Carlsmith uses to denote a two-part structure that frequently appears in programs used to generate our observations.