I would worry about the effect this could have on your credit rating if anyone catches you at it, along with possibly more serious consequences. This could potentially be considered fraud. Altogether it seems much more sensible to simply live within your means and pay off your credit balance each month.
skepsci
What is the advantage of the Kolmogorov complexity prior?
Draco seemed far too easily convinced by the evidence that the decline in magic wasn’t caused by muggle interbreeding, especially since the largest piece of evidence presented was based on muggle knowledge of genetics that Draco was almost completely ignorant of. You’re not going to be convinced that everything you know is wrong just because an 11-year-old tells you that a bunch of allegedly smart people figured out some things which conflict with what you know, in the judgment of said 11-year-old.
At the beginning of Chapter 25, Eliezer writes about the evidence that convinced Draco:
everything Harry presents as strong evidence is in fact strong evidence—the other possibilities are improbable
This is true… but only from Harry’s perspective. Harry assigns extremely high confidence to everything that muggle geneticists have figured out, and also believes that he understands the theory well enough to have high confidence in the predictions he makes using that theory. For Draco to find the same evidence equally convincing, he would also have to assign high confidence to the theory that Harry claims muggle geneticists figured out, and high confidence to Harry’s ability to use the theory correctly to make predictions.
I am probably modeling Draco as far more Bayesian in this analysis than he actually is. But my interactions with people who hold other strong beliefs lead me to the same conclusion: Draco should have resisted more.
Issues of how I think Draco would have acted aside, I would have liked to see more interaction between the two of them on this question, and more investigation. Maybe Draco would want to perform some of the experiments testing Granger’s parentage that Harry suggested, rather than immediately predicting the same results Harry would have? I would have enjoyed seeing more of how a budding experimentalist copes when confronted with not just one but a series of experimental results leading inexorably to a chilling (to Draco) conclusion.
It is bad luck to be superstitious.
-Andrew W. Mathis
It proves that mistakes have been made, but in the end, no, I don’t think it’s terribly useful evidence for evaluating the rate of wrongful convictions. Why not? There have been 289 post-conviction DNA exonerations in US history, mostly in the last 15 years. That gives a rate of under 20 per year. Suppose 10,000 people a year are incarcerated for the types of crime that DNA exoneration is most likely to be possible for, namely murder and rape (I couldn’t find exact figures, but I suspect the real number is at least this big). Then considering DNA exonerations gives us a lower bound of something like 0.2% on the error rate of US courts.
That is only useful evidence about the error rate if your prior estimate of the inaccuracy was less than that, and I mean, come on, really? Only one conviction in 500 is a mistake?
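A quick back-of-the-envelope check of those figures (using the 289 exonerations and 15-year window stated above, plus the assumed 10,000 relevant convictions per year):

```python
# Back-of-the-envelope lower bound on the court error rate,
# using the figures stated above.

exonerations = 289             # post-conviction DNA exonerations in US history
years = 15                     # most occurred within roughly this window
convictions_per_year = 10_000  # assumed incarcerations/year for murder and rape

exonerations_per_year = exonerations / years
lower_bound = exonerations_per_year / convictions_per_year

print(f"{exonerations_per_year:.1f} exonerations per year")
print(f"lower bound on error rate: {lower_bound:.2%}")
```

This confirms the "under 20 per year" and "something like 0.2%" numbers, though of course the 10,000 figure is only a guess.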
How do you enforce the 10% salary tithe?
One obvious difficulty in educating children for free and then expecting them to pay you back after they become educated is that, most places, minors cannot enter into legally binding contracts. So the kid graduates, gets a great job (in a country that won’t recognize the contract), and says, “I never agreed to pay you 10% of my salary, so I’m keeping it.”
The interesting thing now is that we can formalize various inductive hypotheses as priors, such as "anything goes" as a uniform distribution.
A uniform distribution on what? If you start with a uniform distribution on binary sequences, you don’t get to perform inductive reasoning at all, as the observables X(1), X(2), etc. are all independent in that distribution. If you wanted to start with a uniform distribution on computable universes, you can’t, because there is no uniform distribution with countable support.
I model basically everyone I interact with as an agent. This is useful when trying to get help from people who don’t want to help you, such as customer service or bureaucrats. By giving the agent agency, it’s easy to identify the problem: the agent in question wants to get rid of you with the least amount of effort so they can go back to chatting with their coworkers/browsing the internet/listening to the radio. The solution is generally to make it seem like less effort to get rid of you by helping you with your problem (which is their job after all) than something else. This can be done by simply insisting on being helped, making a ruckus, or asking for a manager, depending on the situation.
I think the charitable interpretation is that Eliezer meant someone might figure out an O(N^300) algorithm for some NP-complete problem. I believe that’s consistent with what complexity theorists know; it certainly implies P=NP, but it doesn’t help anyone with the goal of replacing mathematicians with microchips.
It’s also completely ridiculous, with a sample size of ~10 questions, to give the success rate and probability of being well calibrated as percentages to 12 decimal places. Since the uncertainty in such a small sample is on the order of several percent, just round to the nearest percentage point.
The assumption was that 80% of defendants are guilty, which is more than 4 of 8. Under this assumption, asking whether p(guilty|convicted) > 80% is just asking whether conviction positively correlates with guilt. Asking if p(innocent|acquitted) > 20% is just asking if acquittal positively correlates with innocence. These are really the same question, because P correlates with Q iff ¬P correlates with ¬Q.
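The equivalence can be sanity-checked numerically by sampling random joint distributions over (guilty, convicted); both conditions reduce to the covariance between the two indicators being positive. A rough sketch:

```python
import random

# Check: Pr(guilty | convicted) > Pr(guilty) holds exactly when
# Pr(innocent | acquitted) > Pr(innocent), across random joint distributions.
# Both conditions are equivalent to Cov(guilty, convicted) > 0.

random.seed(0)
for _ in range(10_000):
    w = [random.random() for _ in range(4)]
    total = sum(w)
    # joint probabilities of (guilty, convicted), (guilty, acquitted),
    # (innocent, convicted), (innocent, acquitted)
    gc, ga, ic, ia = (x / total for x in w)

    p_guilty = gc + ga
    p_convicted = gc + ic
    if abs(gc - p_guilty * p_convicted) < 1e-9:
        continue  # skip (measure-zero) boundary cases near zero covariance

    lhs = gc / p_convicted > p_guilty              # P correlates with Q
    rhs = ia / (1 - p_convicted) > 1 - p_guilty    # not-P correlates with not-Q
    assert lhs == rhs

print("equivalence holds in all sampled cases")
```

Algebraically this is just the identity Pr(G, C) − Pr(G)Pr(C) = Pr(¬G, ¬C) − Pr(¬G)Pr(¬C), so the two correlation statements have the same sign.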
A hypothesis I’m currently toying with: Quirrell and HJPEV are different versions of the same individual, in some sense, and the Quirrell version is using some form of magic (probably involving breaking the six-hour limit on sending information backwards through time, possibly involving possession of a real Quirrell) to carry out a process of recursive self-improvement on himself. The story we’re currently reading takes place in one iteration of the loop.
Has anyone posted this idea before on the net?
There are some serious problems with this hypothesis:
Quirrell and HJPEV appear to have very different utility functions.
Performing recursive self-improvement starting with a human hardly seems like the kind of thing Eliezer would advocate, considering the likelihood of ending up with an unfriendly superintelligence.
So it’s probably wrong, but I thought it was interesting enough to post.
Outside of mathematical logic, some familiar examples include:
compactness vs. sequential compactness—generalizing from metric to topological spaces
product topology vs. box topology—generalizing from finite to infinite product spaces
finite-dimensional vs. finitely generated (and related notions, e.g. finitely cogenerated)—generalizing from vector spaces to modules
pointwise convergence vs. uniform convergence vs. norm-convergence vs. convergence in the weak topology vs....—generalizing from sequences of numbers to sequences of functions
If a person somehow loses the associated good feelings, ice cream also ceases to be desirable. I still don’t see the difference between Monday and Tuesday.
I think I might have some idea what you mean about masochists not liking pain. Let me tell a different story, and you can tell me whether you agree...
Masochists like pain, but only in very specific environments, such as roleplaying fantasies. Within that environment, masochists like pain because of how it affects the overall experience of the fantasy. Outside that environment, masochists are just as pain-averse as the rest of the world.
Does that story jibe with your understanding?
Is there some background here I’m not getting? Because this reads like you’ve talked someone into committing suicide over IRC...
Um, AIXI is not computable. Relatedly, K(AIXI) is undefined, as AIXI is not a finite object.
Also, A can simulate B, even when K(B)>K(A). For example, one could easily define a computer program which, given sufficient computing resources, simulates all Turing machines on all inputs. This must obviously include those with much higher Kolmogorov complexity.
Yes, you run into issues of two Turing machines/agents/whatever simulating each other. (You could also get this from the recursion theorem.) What happens then? Simple: neither simulation ever halts.
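The "fixed-complexity simulator running arbitrarily complex programs" point can be illustrated with a toy universal interpreter. Here is a hypothetical sketch (not from the original discussion): a few dozen lines of Python that can run any Brainfuck program, even though the interpreter itself has small, fixed Kolmogorov complexity.

```python
def run_bf(code, input_bytes=b""):
    """Interpret a Brainfuck program and return its output as bytes."""
    # Precompute matching bracket positions for the loop instructions.
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i

    tape, ptr, pc, out, inp = [0] * 30_000, 0, 0, [], iter(input_bytes)
    while pc < len(code):
        c = code[pc]
        if c == ">":   ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(tape[ptr])
        elif c == ",": tape[ptr] = next(inp, 0)
        elif c == "[" and tape[ptr] == 0:
            pc = jumps[pc]  # jump past the matching ]
        elif c == "]" and tape[ptr] != 0:
            pc = jumps[pc]  # jump back to the matching [
        pc += 1
    return bytes(out)

# The interpreter is a short, fixed program, yet the programs it can run
# have unbounded Kolmogorov complexity (any BF program at all).
print(run_bf("++++++++[>++++++++<-]>+."))  # b'A'
```

The same point carries over to Turing machines: a universal machine of fixed description length simulates machines of arbitrarily large description length, which is why K(B) > K(A) is no obstacle to A simulating B.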
Humans have a preference for simple laws because those are the ones we can understand and reason about. The history of physics is a history of coming up with gradually more complex laws that are better approximations to reality.
Why not expect this trend to continue with our best model of reality becoming more and more complex?
Exactly. In fact, it was well known at the time that the Earth is round, and most educated people even knew the approximate size (which was calculated by Eratosthenes in the third century BCE). Columbus, on the other hand, used a much less accurate figure, which was off by a factor of 2.
The popular myth that Columbus was right and his contemporaries were wrong is the exact opposite of the truth.
I would be very interested if anyone has good examples of this phenomenon.
There are a few “triads” mentioned in the intellectual hipster article, but the only one that really seems to me like a good example of this phenomenon is the “don’t care about Africa / give aid to Africa / don’t give aid to Africa” triad.
Yay, I wasn’t last!
Still, I’m not surprised that laziness did not pay off. I wrote a simple bot, then noticed that it cooperated against defectbot and defected against itself. I thought to myself, “This is not a good sign.” Then I didn’t bother changing it.