Sewing-Machine correctly pointed out, above, that this contradicts what we already know.
To be pedantic, perhaps I should say the configurations coded by the exponents of larger primes reflect on the configurations encoded by the exponents of smaller primes, since we have the entire computation frozen in amber, as it were.
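To make the "frozen in amber" picture concrete, here is a toy sketch (my own illustration, not any particular standard Gödel numbering): pack each configuration's code into the exponent of a successive prime, so the entire run sits inside one static integer, with later configurations living in the exponents of larger primes.

```python
def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for a toy example)."""
    n = 2
    while True:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def encode_history(configs):
    """configs: positive integers, each coding one machine configuration."""
    number, gen = 1, primes()
    for c in configs:
        number *= next(gen) ** c      # i-th configuration -> exponent of i-th prime
    return number

def decode_history(number):
    """Read the configuration codes back off the prime exponents."""
    configs, gen = [], primes()
    while number > 1:
        p, e = next(gen), 0
        while number % p == 0:
            number //= p
            e += 1
        configs.append(e)
    return configs

history = [3, 1, 4, 1, 5]             # made-up configuration codes
n = encode_history(history)           # 2**3 * 3**1 * 5**4 * 7**1 * 11**5
assert decode_history(n) == history   # the whole run is recoverable from one number
```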
It eventually learns that the simplest explanation for its experiences is the description of an external lawful universe in which its sense organs are embedded and a description of that embedding.
That’s the simplest explanation for our experiences. It may or may not be the simplest explanation for the experiences of an arbitrary sentient thinker.
Rather than supposing that the probability of a certain universe depends on the complexity of that universe, it takes as a primitive object a probability distribution over possible experiences. By the same reasoning that led a normal Solomonoff inductor to accept the existence of an external universe as the best explanation for its experiences, the least complex description of your conscious experience is the description of an external lawful universe and directions for finding the substructure embodying your experience within that universe.
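For concreteness (my gloss, not necessarily the construction being described here), one standard way to put a prior directly on experience-strings is Solomonoff’s universal semimeasure over a monotone universal machine U:

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}$$

where p ranges over the programs whose output begins with the experience-string x. The highest-weight terms are the shortest such programs, which is why the best explanation ends up being a compact description of a lawful universe plus directions for locating your experience within it.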
Unless I’m misunderstanding you, you’re saying that we should start with an arbitrary prior (which may or may not be the same as Solomonoff’s universal prior). If you’re starting with an arbitrary prior, you have no idea what the best explanation for your experiences is going to be, because it depends on the prior. According to one prior, it’s a Giant Lookup Table. According to another, you’re being emulated by a supercomputer in a universe whose physics is being emulated at the elementary-particle level by hand calculations performed by an immortal sentient being (with an odd utility function) who lives in an external lawful universe.
Of course, the same will be true if you take the standard universal prior, but define Kolmogorov complexity relative to a sufficiently bizarre universal Turing machine (of which there are many). According to the theory, it doesn’t matter, because over time you will predict your experiences with greater and greater accuracy. But you never update the relative credences you give to different models that make the same predictions. So if you started off thinking that the simulation of the simulation of the simulation was a better model than simply discarding the outer layers and taking the innermost level, you will forever hold the unfalsifiable belief that you live in an inescapable Matrix, even as you use your knowledge to correctly model reality and use your model to maximize your personal utility function (or whatever it is Solomonoff inductors are supposed to do).
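A tiny numerical sketch of that last point (my own, with made-up hypothesis names): when two models assign the same probability to every observation, Bayes’ rule multiplies both credences by the same likelihood and renormalizes, so their ratio never moves.

```python
# Two hypothetical, observationally equivalent models: "innermost" (you live in
# the base-level universe) and "nested_sim" (the same universe, wrapped in
# redundant layers of simulation). The names are mine, purely for illustration.
prior = {"innermost": 0.2, "nested_sim": 0.8}   # arbitrary starting credences

def bayes_update(credence, likelihoods):
    """Ordinary Bayes update: multiply by the likelihood, then renormalize."""
    unnorm = {h: credence[h] * likelihoods[h] for h in credence}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

credence = dict(prior)
for _ in range(1000):
    # Both models assign the same probability to every observation,
    # so the likelihoods are always identical...
    credence = bayes_update(credence, {"innermost": 0.7, "nested_sim": 0.7})

print(credence)   # ...and the original 0.2 / 0.8 split survives unchanged
```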
The Gödel number of a Turing computation encodes not just a single configuration of the machine, but every configuration the machine passes through from beginning to end, so it’s more than just a paused Turing machine. It’s true that there are no dynamics, but after all there are no dynamics in a timeless universe either, yet there’s reason to suspect we might live in one.
The later configurations reflect on the earlier configurations, which is, for all intents and purposes, active reflection.
Glaring redundancy aside, isn’t “self-introspective” just as intensionally valid or void as “conscious”?
A philosopher says, “This zombie’s skull contains a Giant Lookup Table of all the inputs and outputs for some human’s brain.” This is a very large improbability. So you ask, “How did this improbable event occur? Where did the GLUT come from?”
The philosopher is clearly simulating our universe, since as Eliezer already observed, a Giant Lookup Table won’t fit in our universe. So he may as well be simulating 10^10^10^20 copies of our universe, each with a different Giant Lookup Table, so that every possible Giant Lookup Table gets represented in some simulation. Now the improbability just comes from the law of large numbers, rather than any conscious being. The end result still talks about consciousness, but the root cause of this talking-about-consciousness is no longer a conscious entity, but the mere fact that in a large enough pool of numbers, some of them happen to encode what looks like the output of a conscious being.
Or is it that, for example, the Gödel number of a Turing machine computation of a conscious entity is actually conscious? Actually, now that I think of it, I suppose it must be. Weird.
Counting should be illegal.
I think the charitable interpretation is that Eliezer meant someone might figure out an O(N^300) algorithm for some NP-complete problem. I believe that’s consistent with what complexity theorists know, and it certainly implies P=NP, but it doesn’t help anyone with the goal of replacing mathematicians with microchips.
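A rough back-of-the-envelope illustration (my numbers, not from the original discussion): even one modest instance makes an O(N^300) algorithm useless in practice.

```python
# Made-up but representative figures: input size N = 100, an optimistic machine
# doing 10**18 steps per second, and an age of the universe of ~ 4 * 10**17 seconds.
steps = 100 ** 300                              # work for an O(N**300) algorithm at N = 100
seconds = steps // 10 ** 18                     # wall-clock time at 10**18 steps per second
universe_lifetimes = seconds // (4 * 10 ** 17)  # how many ages of the universe that is
print(len(str(universe_lifetimes)))             # ~565 digits: polynomial, but hopeless in practice
```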
Draco seemed far too easily convinced by the evidence that the decline in magic wasn’t caused by muggle interbreeding, especially since the largest piece of evidence presented was based on muggle knowledge of genetics of which Draco was almost completely ignorant. You’re not going to be convinced that everything you know is wrong just because an 11-year-old tells you that a bunch of allegedly smart people have figured out some things which, in the judgment of said 11-year-old, conflict with what you know.
At the beginning of Chapter 25, Eliezer writes about the evidence that convinced Draco:
everything Harry presents as strong evidence is in fact strong evidence—the other possibilities are improbable
This is true… but only from Harry’s perspective. Harry assigns extremely high confidence to everything that muggle geneticists have figured out, and also believes that he understands the theory well enough to have high confidence in the predictions he makes using that theory. For Draco to find the same evidence equally convincing, he would also have to assign high confidence to the theory that Harry claims muggle geneticists figured out, and high confidence to Harry’s ability to use the theory correctly to make predictions.
In this analysis I am probably modeling Draco as way more Bayesian than he actually is. But my interactions with individuals holding other strongly held beliefs also lead me to the same conclusion: Draco should have resisted more.
Issues of how I think Draco would have acted aside, I would have liked to see more interaction between the two of them on this question, more investigations. Maybe Draco would want to perform some of the experiments testing Granger’s parentage that Harry suggested, rather than immediately predicting the same results as Harry would have? I would have enjoyed seeing more of how a budding experimentalist copes when confronted with not just one but a series of experimental results leading inexorably to a chilling (to Draco) conclusion.
To perfectly model your thought processes, it would be enough that your brain activity be deterministic; it doesn’t follow that the universe is deterministic. The fact that my computer can model a Nintendo well enough for me to play video games does not imply that a Nintendo is built out of deterministic elementary particles, and a Nintendo emulator that simulated every elementary particle interaction in the Nintendo it was emulating would be ridiculously inefficient.
Um, AIXI is not computable. Relatedly, K(AIXI) is undefined, as AIXI is not a finite object.
Also, A can simulate B, even when K(B)>K(A). For example, one could easily define a computer program which, given sufficient computing resources, simulates all Turing machines on all inputs. This must obviously include those with much higher Kolmogorov complexity.
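Here is a minimal sketch of that kind of program (my toy illustration, using a made-up straight-line instruction set rather than real Turing machines): a short, fixed dovetailer whose own description never grows, yet which eventually simulates every program of the toy language, including ones of far greater Kolmogorov complexity.

```python
from itertools import count, islice, product

OPS = "+-<>"   # toy instruction set (hypothetical, chosen only for concreteness)

def run(code, max_steps):
    """Interpret `code` on a small blank tape for at most `max_steps` steps."""
    tape, ptr = [0] * 16, 0
    for op in code[:max_steps]:
        if op == "+": tape[ptr] += 1
        elif op == "-": tape[ptr] -= 1
        elif op == ">": ptr = (ptr + 1) % len(tape)
        elif op == "<": ptr = (ptr - 1) % len(tape)
    return tape

def all_programs():
    """Every finite string over OPS, shortest first."""
    for length in count(1):
        for chars in product(OPS, repeat=length):
            yield "".join(chars)

def dovetail(rounds):
    """Round r: run the first r programs for r steps each."""
    for r in range(1, rounds + 1):
        for code in islice(all_programs(), r):
            run(code, r)

dovetail(25)   # left running forever, this would reach every program in the enumeration
```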
Yes, you run into issues of two Turing machines/agents/whatever simulating each other. (You could also get this from the recursion theorem.) What happens then? Simple: neither simulation ever halts.
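As a caricature (mine), the deadlock looks like this; in Python the infinite regress surfaces as a RecursionError rather than literally spinning forever:

```python
def simulate_A():
    return simulate_B()   # A must finish simulating B first...

def simulate_B():
    return simulate_A()   # ...but B must finish simulating A first.

# simulate_A()   # uncommenting this never produces a result
```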
I’m not really comfortable with counterfactuals, when the counterfactual is a mathematical statement. I think I can picture a universe in which isolated pieces of history or reality are different; I can’t picture a universe in which the math is different.
I suppose such a counterfactual makes sense from the standpoint of someone who does not know the antecedent is mathematically impossible, and thinks rather that it is a hypothetical. I was trying to give a hypothetical (rather than a counterfactual) with the same intent, one which is not obviously counterfactual given the current state of the art.