“How many Overcoming Bias readers does it take to change a lightbulb?”
Actually it’s 3^^^3 + 1 (the first 3^^^3 have something in their eye).
“Then some combination of the party structure, and the media telling complicit voters who voters are likely to vote for, is exerting on the order of 14-15 bits of power over the Presidency; while the voters only exert 3-4 bits.”
I don’t buy this. The vast majority of random people would lose an election race against Hillary/Giuliani/etc even if the party structure and media supported them. So I would say many of those 14-15 bits are actually forced moves caused by good estimates of voter preference. Am I missing something? I’m having trouble thinking of a standard of nincompoophood according to which the average presidential candidate is a nincompoop but the average voter is not.
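To make the “bits” framing concrete, here is a toy sketch (my numbers are illustrative, not from the post): a stage’s bits of power over an outcome can be read as log2 of how much that stage narrows the candidate pool. The “forced move” point is then that a stage can narrow the pool a lot while exercising little discretion, if its choices are dictated by predictions of the later stage.

```python
from math import log2

# Toy model: "bits of power" of a selection stage, read as log2 of how
# much the stage narrows the field. All pool sizes below are made up
# for illustration.
def bits_of_power(pool_before, pool_after):
    return log2(pool_before / pool_after)

# Say party structure and media narrow ~50,000 plausible contenders to
# 2 nominees, and voters then pick 1 of the 2:
gatekeeper_bits = bits_of_power(50_000, 2)  # roughly 14-15 bits
voter_bits = bits_of_power(2, 1)            # 1 bit

# The comment's objection: many of the gatekeepers' bits are spent on
# forced moves (anticipating voter preference), so raw bit counts
# overstate their independent power.
```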
“I’m glad I didn’t do the “sensible” thing. Less blood on my hands.”
I tend to think that what matters is whether the blood is still in the body of the person it belongs to, not so much whose hands it’s on once it’s out.
I’m completely not getting this. If all possible mind-histories are instantiated at least once, and their being instantiated at least once is all that matters, then how does anything we do matter?
If you became convinced that people had not just little checkmarks but little continuous dials representing their degree of existence (as measured by algorithmic complexity), how would that change your goals?
Few of these weirdtopias seem strangely appealing in the same way that conspiratorial science seems strangely appealing.
“And yes, you can not only fit General Relativity into this paradigm, it actually comes out looking even more elegant than before.”
Eliezer, do you realize the difference between Barbour’s treatments of classical mechanics and GR? In GR, he bases everything not just on relations between matter, but on relations between matter and space itself (at least its metric structure). When he calls his theory “relational” he is engaging in wordplay. The Pooley paper I linked in yesterday’s comments goes into gory philosophical detail on this.
I think some people (not including Eliezer) see that Barbour says “there is no time” and imagine that he invented the idea of a block universe (which I personally don’t see any philosophical problems with). But it’s everyone else who believes in block universes; Barbour’s universe is an unsorted-pile-of-block-slices universe. Barbour’s theory de-unifies space and time. Ouch!
Lee Smolin is one of the people behind relational QM, and he’s a naive Popperian. To me he’s the closest thing that physics has to a philosophical anti-authority.
I’m not Eliezer nor am I a pro, but I think I agree with Eliezer’s account, and as a first attempt I think it’s something like this...
When X judges that Y should Z, X is judging that Z is the solution to the problem W, where W is a rigid designator for the problem structure implicitly defined by the machinery shared by X and Y which they both use to make desirability judgments. (Or at least X is asserting that it’s shared.) Due to the nature of W, becoming informed will cause X and Y to get closer to the solution of W, but wanting-it-when-informed is not what makes that solution moral.
“boreana”
This means “half Bolivian half Korean” according to urbandictionary. I bet I’m missing something.
Perhaps we should have a word (“mehtopia”?) for any future that’s much better than our world but much worse than could be. I don’t think the world in this story qualifies; I hate to be the negative guy all the time, but if you keep human nature the same and “set guards in the air that prohibit lethal violence, and any damage less than lethal, your body shall repair”, people may still abuse one another a lot physically and emotionally. Also, I’m not keen on having to do a space race against a whole planet full of regenerating vampires.
“Reality is that which, when you stop believing in it, doesn’t go away.”
This is false, of course; with sufficiently advanced technology you could build a machine that read out your mind state and caused Earth to disappear once it determined you no longer believed in Earth. Doesn’t mean Earth was never real.
Math isn’t a language, mathematical notation is a language. Math is a subject matter that you can talk about in mathematical notation, or in English, etc.
“Asking ‘What happened before the Big Bang?’ is revealed as a wrong question. There is no ‘before’; a ‘before’ would be outside the configuration space. There was never a pre-existing emptiness into which our universe exploded. There is just this timeless mathematical object, time existing within it; and the object has a natural boundary at the Big Bang. You cannot ask ‘When did this mathematical object come into existence?’ because there is no t outside it.”
This has been true of the standard (FRW) big bang models since, what, the 1920s?
“Warning: Mach’s Principle is not experimentally proven, though it is widely considered to be credible.”
I don’t see what experiments have to do with anything so long as we all agree GR is true. Apparently there are a lot of different things that people have called “Mach’s principle”, and GR obeys some of them but not others: http://arxiv.org/PS_cache/gr-qc/pdf/9607/9607009v1.pdf. For example, it seems like you want to claim “Mach7” from this paper (“If you take away all matter, there is no more space”), which is false. It also seems like you want to claim “Mach10”, which is meaningless in GR. There’s a thing called “Gödel’s rotating universe”, so clearly there’s something subtle going on.
Is it your actual opinion that nuclear war between the US and USSR would have destroyed the world (or human civilization), or was that just a figure of speech? The distinction seems worth upholding.
I tend to agree with Eliezer-February-2007:
“If you want to make a point about science, or rationality, then my advice is to not choose a domain from contemporary politics if you can possibly avoid it. If your point is inherently about politics, then talk about Louis XVI during the French Revolution. Politics is an important domain to which we should individually apply our rationality—but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.”
How many psychiatrists does it take to change a light bulb? One, but it has to want to change. How many Eliezer Yudkowskys does it take to change a light bulb? One, but it has to still want to change when it’s smarter, thinks faster, and is more like the light bulb it wants to be.
Eliezer,
“I don’t expect my own environment to be random noise, but that has nothing to do with witchcraft...”
I think I misinterpreted the math and now see what you’re getting at. Would it be an accurate translation to human language to say, “a sequence like 10101010 may favor witchcraft over the hypothesis that nothing weird is going on (i.e. the coinflips are random), but it will never favor witchcraft over the simpler hypothesis that something weird is going on that isn’t witchcraft”?
I find it awkward to think of “witchcraft” as just a content-free word; what “witchcraft” means to me is something like the possibility that reality includes human-mind-like things with personalities and with preferences that they achieve through unknown nonstandard causal means. If you coded that up, it would probably no longer be content-free; it would allow shortening the rest of the program generating the sequences in some cases and require lengthening it in some other cases. In all realistic cases the resulting program would still be longer than necessary.
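The description-length point above can be sketched as a toy calculation (my own framing, with made-up “costs” standing in for program lengths; nothing here is from the original post): a witchcraft hypothesis that must still spell out the data exactly can never come out shorter than the pattern hypothesis it wraps.

```python
# Toy minimum-description-length sketch. A hypothesis "scores" a bit
# string by the length of the shortest message encoding it under that
# hypothesis; these string lengths are crude stand-ins for real
# program lengths.
def cost_random(seq):
    # "fair coinflips": every bit must be written out literally
    return len(seq)

def cost_weird(seq):
    # "something weird is going on": permit a short description of a
    # regularity, modeled here only for a perfectly alternating string
    if seq == "10" * (len(seq) // 2):
        return len("repeat 10")  # constant-size spec
    return len(seq)

def cost_witch(seq):
    # "a witch did it": the witch's intentions don't pin down the data,
    # so the message still needs everything cost_weird needs, plus the
    # extra symbols naming the witch hypothesis itself.
    return len("witch:") + cost_weird(seq)

seq = "10" * 8
# On this sequence, the weird-pattern hypothesis beats fair coinflips,
# but witchcraft can never beat the pattern hypothesis it wraps.
```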
Perhaps a benevolent singleton would cripple all means of transport faster than say horses and bicycles, so as to preserve/restore human intuitions and emotions relating to distance (far away lands and so on)?
Just to clarify, Hallq uses “mutual knowledge” as if it were synonymous with “common knowledge”, but game theorists use the two terms as a contrast: mutual knowledge of A is when everyone knows A; common knowledge of A is when everyone knows that everyone knows that everyone knows (…) A, iterated without end. So this is about raising to common knowledge things that were merely mutual knowledge.
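A minimal way to see the contrast (my own toy formalization, not anything from Hallq or the game-theory literature): mutual knowledge is the level-1 statement, while common knowledge asserts every finite level at once.

```python
# Toy illustration: build the nth-order knowledge statement as a string,
# making the iteration explicit.
def knowledge_statement(a, level):
    """Level 1: 'everyone knows A'. Level n: 'everyone knows that <level n-1>'."""
    s = f"everyone knows {a}"
    for _ in range(level - 1):
        s = f"everyone knows that {s}"
    return s

mutual = knowledge_statement("A", 1)
# Common knowledge of A asserts knowledge_statement("A", n) for every
# n >= 1, which is strictly stronger than any single finite level.
```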
Eliezer, “more AIs are in the hurting class than in the disassembling class” is a distinct claim from “more AIs are in the hurting class than in the successful class”, which is the one I interpreted Yvain as attributing to you.
IMHO if anthropics worked that way and if the LHC really were a world-killer, you’d find yourself in a world where we had the propensity not to build the LHC, not one where we happened not to build one due to a string of improbable coincidences.
Awesome post, but somebody should do the pessimist version, rewriting various normal facets of the human condition as horrifying angsty undead curses.