To perfectly model your thought processes, it would be enough that your brain activity be deterministic; it doesn’t follow that the universe is deterministic. The fact that my computer can model a Nintendo well enough for me to play video games does not imply that a Nintendo is built out of deterministic elementary particles, and a Nintendo emulator that simulated every elementary particle interaction in the Nintendo it was emulating would be ridiculously inefficient.
skepsci
Draco seemed far too easily convinced by the evidence that the decline in magic wasn’t caused by muggle interbreeding, especially since the largest piece of evidence presented was based on muggle knowledge of genetics that Draco was almost completely ignorant of. You’re not going to be convinced that everything you know is wrong just because an 11-year-old tells you that a bunch of allegedly smart people figured out some things which conflict with what you know, in the judgment of said 11-year-old.
At the beginning of Chapter 25, Eliezer writes about the evidence that convinced Draco:
everything Harry presents as strong evidence is in fact strong evidence—the other possibilities are improbable
This is true… but only from Harry’s perspective. Harry assigns extremely high confidence to everything that muggle geneticists have figured out, and also believes that he understands the theory well enough to have high confidence in the predictions he makes using that theory. For Draco to find the same evidence equally convincing, he would also have to assign high confidence to the theory that Harry claims muggle geneticists figured out, and high confidence to Harry’s ability to use the theory correctly to make predictions.
I am probably modeling Draco as far more Bayesian than he actually is in this analysis. But my interactions with people who hold other strong beliefs lead me to the same conclusion: Draco should have resisted more.
Issues of how I think Draco would have acted aside, I would have liked to see more interaction between the two of them on this question, and more investigation. Maybe Draco would want to perform some of the experiments testing Granger’s parentage that Harry suggested, rather than immediately predicting the same results Harry would have? I would have enjoyed seeing more of how a budding experimentalist copes when confronted with not just one but a series of experimental results leading inexorably to a conclusion that is (to Draco) chilling.
I think the charitable interpretation is that Eliezer meant someone might figure out an O(N^300) algorithm for some NP-complete problem. I believe that’s consistent with what complexity theorists know; it certainly implies P=NP, but it doesn’t help anyone with the goal of replacing mathematicians with microchips.
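As a rough sanity check (the 2^N brute-force baseline and all numbers here are my own illustration, not from the original argument), exact integer arithmetic shows an O(N^300) algorithm doesn’t even overtake naive exponential search until N is in the thousands, by which point both step counts have over a thousand digits:

```python
import itertools

# Smallest N at which brute force (2**N steps) finally costs more than a
# hypothetical O(N**300)-step algorithm. Exact big-integer comparison,
# so there is no floating-point rounding to worry about.
def crossover(exponent=300):
    for n in itertools.count(2):
        if 2 ** n > n ** exponent:
            return n

n = crossover()
print(n, len(str(2 ** n)))  # crossover point, and the digit count of 2**n there
```

For every input size below that crossover, the “polynomial-time” algorithm is slower than brute force, which is the sense in which a P=NP proof of this shape wouldn’t put mathematicians out of work.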
A philosopher says, “This zombie’s skull contains a Giant Lookup Table of all the inputs and outputs for some human’s brain.” This is a very large improbability. So you ask, “How did this improbable event occur? Where did the GLUT come from?”
The philosopher is clearly simulating our universe, since, as Eliezer already observed, a Giant Lookup Table won’t fit in our universe. So he may as well be simulating 10^10^10^20 copies of our universe, each with a different Giant Lookup Table, so that every possible Giant Lookup Table gets represented in some simulation. Now the improbability comes merely from the law of large numbers, rather than from any conscious being. The end result still talks about consciousness, but the root cause of this talking-about-consciousness is no longer a conscious entity, but the mere fact that in a large enough pool of numbers, some of them happen to encode what looks like the output of a conscious being.
Or is it that, for example, the Gödel number of a Turing machine computation of a conscious entity is actually conscious? Actually, now that I think of it, I suppose it must be. Weird.
Counting should be illegal.
Glaring redundancy aside, isn’t “self-introspective” just as intensionally valid or void as “conscious”?
The Gödel number of a Turing computation encodes not just a single configuration of the machine, but every configuration the machine passes through from beginning to end, so it’s more than just a paused Turing machine. It’s true that there are no dynamics, but then there are no dynamics in a timeless universe either, yet there’s reason to suspect we might live in one.
The later configurations reflect on the earlier configurations, which is, for all intents and purposes, active reflection.
It eventually learns that the simplest explanation for its experiences is the description of an external lawful universe in which its sense organs are embedded and a description of that embedding.
That’s the simplest explanation for our experiences. It may or may not be the simplest explanation for the experiences of an arbitrary sentient thinker.
Rather than supposing that the probability of a certain universe depends on the complexity of that universe, it takes as a primitive object a probability distribution over possible experiences. By the same reasoning that led a normal Solomonoff inductor to accept the existence of an external universe as the best explanation for its experiences, the least complex description of your conscious experience is the description of an external lawful universe and directions for finding the substructure embodying your experience within that universe.
Unless I’m misunderstanding you, you’re saying that we should start with an arbitrary prior (which may or may not be the same as Solomonoff’s universal prior). If you’re starting with an arbitrary prior, you have no idea what the best explanation for your experiences is going to be, because it depends on the prior. According to some prior, it’s a Giant Lookup Table. According to some prior, you’re being emulated by a supercomputer in a universe whose physics is being emulated at the elementary particle level by hand calculations performed by an immortal sentient being (with an odd utility function), who lives in an external lawful universe.
Of course, the same will be true if you take the standard universal prior, but define Kolmogorov complexity relative to a sufficiently bizarre universal Turing machine (of which there are many). According to the theory, it doesn’t matter, because over time you will predict your experiences with greater and greater accuracy. But you never update the relative credences you give to different models that make the same predictions. So if you started off thinking that the simulation of the simulation of the simulation was a better model than simply discarding the outer layers and taking the innermost level, you will forever hold the unfalsifiable belief that you live in an inescapable Matrix, even as you use your knowledge to correctly model reality and to maximize your personal utility function (or whatever it is Solomonoff inductors are supposed to do).
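A toy numerical sketch of that last point (both model names and all numbers are invented for illustration): when two models assign identical likelihoods to every observation, Bayesian updating never changes their relative credences.

```python
# Two empirically indistinguishable "world models": both assign the same
# likelihood to every observation, so no amount of data separates them.
def normalize(dist):
    total = sum(dist.values())
    return {k: v / total for k, v in dist.items()}

likelihood = {"bare_universe": 0.9, "matrix_in_a_matrix": 0.9}
posterior = normalize({"bare_universe": 0.2, "matrix_in_a_matrix": 0.8})

for _ in range(1000):  # observe 1000 data points, Bayes-update each time
    posterior = normalize({m: p * likelihood[m] for m, p in posterior.items()})

ratio = posterior["matrix_in_a_matrix"] / posterior["bare_universe"]
print(ratio)  # still ~4: the prior credence ratio survives every update
```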
To be pedantic, perhaps I should say the configurations encoded by the exponents of larger primes reflect on the configurations encoded by the exponents of smaller primes, since we have the entire computation frozen in amber, as it were.
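For concreteness, a minimal sketch of that prime-exponent encoding (the integer codes for configurations are arbitrary placeholders): the exponent of the i-th prime stores the machine’s configuration at step i, so a single number freezes the entire run.

```python
# Encode a whole computation history as one Gödel number:
# trace[i] becomes the exponent of the i-th prime.
def primes():
    found = []
    candidate = 2
    while True:
        if all(candidate % p for p in found):
            found.append(candidate)
            yield candidate
        candidate += 1

def godel_encode(trace):
    n = 1
    for p, config in zip(primes(), trace):
        n *= p ** config
    return n

def godel_decode(n, steps):
    trace = []
    for p, _ in zip(primes(), range(steps)):
        exponent = 0
        while n % p == 0:
            n //= p
            exponent += 1
        trace.append(exponent)
    return trace

trace = [3, 1, 4, 1, 5]      # configurations at steps 0..4 (made up)
g = godel_encode(trace)      # 2**3 * 3**1 * 5**4 * 7**1 * 11**5
assert godel_decode(g, len(trace)) == trace  # the frozen history is recoverable
```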
Sewing-Machine correctly pointed out, above, that this contradicts what we already know.
I’m not really comfortable with counterfactuals when the counterfactual is a mathematical statement. I think I can picture a universe in which isolated pieces of history or reality are different; I can’t picture a universe in which the math is different.
I suppose such a counterfactual makes sense from the standpoint of someone who does not know the antecedent is mathematically impossible, and thinks rather that it is a hypothetical. I was trying to give a hypothetical (rather than a counterfactual) with the same intent, one that is not obviously counterfactual given the current state of the art.
Also, on Rita Skeeter: they either Memory-Charmed her into believing everything herself, or Imperiused her into doing so, or simply used Polyjuice Potion to turn the article in themselves. My money would be on Memory-Charming her, because it would be the most practical, reliable way of getting her to believe everything, write an authentic article, and convince her editor to run it the next day as the headliner.
My most plausible hypothesis is that their plan for fooling Rita Skeeter is some incredibly clever black box that Eliezer hasn’t bothered to fill in, even for himself, because it’s simply not that important to the plot to waste time coming up with something suitably clever they might have done. Any attempt to figure out what they did would then be wasted, since the author can’t be dropping clues to an answer he doesn’t even know.
Well, you can. It’s just oxymoronic, or at least ironic, because belief is contrary to the Bayesian paradigm. You use Bayesian methods to choose an action: you have a set of observations, you assign probabilities to possible outcomes, and you choose an action.
If you’re always using Bayesian methods to choose an action, it doesn’t matter what value of P(Bayes’ theorem) is set in your skull; it may as well be 1. If Bayes’ theorem is built into your very thought processes and it’s false, you’re fucked.
You might be able to get around this by following Bayesian methods to choose an action as long as P(Bayes’ theorem) > 0.5, and then scrapping your entire decision-making algorithm and building a new one from scratch when that stops being true. But how do you decide on a new decision-making algorithm once your old decision-making algorithm has failed you?
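For what it’s worth, the “use Bayesian methods to choose an action” loop is easy to make concrete (all states, actions, and numbers below are invented for illustration): update on an observation with Bayes’ theorem, then take the action with the highest expected utility.

```python
# Prior over hidden states, and P(observation="clouds" | state).
prior = {"rain": 0.3, "dry": 0.7}
likelihood = {"rain": 0.8, "dry": 0.2}

# Bayes' theorem: P(state | clouds) is proportional to
# P(clouds | state) * P(state), renormalized.
unnormalized = {s: prior[s] * likelihood[s] for s in prior}
total = sum(unnormalized.values())
posterior = {s: p / total for s, p in unnormalized.items()}

utility = {  # utility[action][state]
    "take_umbrella": {"rain": 1.0, "dry": 0.7},
    "go_without":    {"rain": 0.0, "dry": 1.0},
}
expected = {action: sum(posterior[s] * u for s, u in payoffs.items())
            for action, payoffs in utility.items()}
best = max(expected, key=expected.get)
print(best)  # with these numbers: take_umbrella
```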
A hypothesis I’m currently toying with: Quirrell and HJPEV are different versions of the same individual, in some sense, and the Quirrell version is using some form of magic (probably involving breaking the 6-hour-limit on sending information backwards through time, possibly involving possession of a real Quirrell) to carry out a process of recursive self-improvement on himself. The story we’re currently reading takes place in one iteration of the loop.
Has anyone posted this idea before on the net?
There are some serious problems with this hypothesis:
Quirrell and HJPEV appear to have very different utility functions.
Performing recursive self-improvement starting with a human hardly seems like the kind of thing Eliezer would advocate, considering the likelihood of ending up with an unfriendly superintelligence.
So it’s probably wrong, but I thought it was interesting enough to post.
To elaborate on this a little bit, you can think of the laws of physics as a differential equation, and the universe as a solution. You can imagine what would happen if the universe passed through a different state (just solve the differential equation again, with new initial conditions), or even different physics (solve the new differential equation), but how do you figure out what happens when calculus changes?
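That picture is easy to make concrete with a toy law of physics (everything here is my own illustration): take dx/dt = -x as the “laws”, and let a “universe” be the solution picked out by an initial condition. Re-solving under a different state is mechanical; there is no analogous operation for “re-solve under different calculus”.

```python
# "Laws of physics": the fixed differential equation dx/dt = -x.
# A "universe" is one solution, selected by its initial condition.
def solve(x0, t_end=1.0, steps=10000):
    dt = t_end / steps
    x = x0
    for _ in range(steps):
        x += dt * (-x)  # forward-Euler step for dx/dt = -x
    return x

# Same laws, different initial conditions -> different histories.
print(solve(1.0))  # close to exp(-1) ~ 0.3679
print(solve(2.0))  # twice the first, since the law is linear
```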
Is there some background here I’m not getting? Because this reads like you’ve talked someone into committing suicide over IRC...
Thank you. I knew that, but didn’t make the association.
What is the advantage of the Kolmogorov complexity prior?
I noticed an obvious fallacy in the linked argument:
If infinite person-years possible, life extension is amoral.
What? Surely if infinite person-years are possible, it’s better for everyone to be immortal than only some, so life extension would be morally preferable, not morally neutral.
Also, why are we assuming the number of person-years lived is independent of the average lifespan? All he exhibited was an upper bound independent of the average lifespan, which is not at all the same thing. If you can’t justify the hypothesis that lifespan is a zero-sum game, the entire argument falls apart.
Um, AIXI is not computable. Relatedly, K(AIXI) is undefined, as AIXI is not a finite object.
Also, A can simulate B, even when K(B)>K(A). For example, one could easily define a computer program which, given sufficient computing resources, simulates all Turing machines on all inputs. This must obviously include those with much higher Kolmogorov complexity.
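A minimal sketch of such a program (Python generators stand in for Turing machines; the dovetailing schedule is the standard one): each round admits one new machine and runs every admitted machine one more step, so no non-halting machine ever starves the rest.

```python
def machine(index):
    # Stand-in for the index-th Turing machine: runs forever,
    # yielding (machine index, step number) at each step.
    step = 0
    while True:
        yield (index, step)
        step += 1

def dovetail(rounds):
    active, log = [], []
    for _ in range(rounds):
        active.append(machine(len(active)))  # admit the next machine
        for m in active:                     # one step for each admitted machine
            log.append(next(m))
    return log

# After round r, machine i has run (r - i) steps; every machine
# eventually receives unboundedly many steps.
print(dovetail(4))
```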
Yes, you run into issues of two Turing machines/agents/whatever simulating each other. (You could also get this from the recursion theorem.) What happens then? Simple: neither simulation ever halts.