I know a really bad one which nearly turned my stomach: Some newspaper wrote “Survey uncovers that X’s have the property Y!” (I forget the details). I read the article and it turned out that, according to some survey, most people believe that X’s have the property Y. Argh!
Frank_Hirsch
I think the trouble about “Have you stopped beating your wife?” is that it is not about a state but about a state transition. It asks “10?”, and the answer “no” really leaves three possibilities open (including that the questionee has recently started beating his wife). The sentence structure implies a false choice between answers 10 and 11, because we are used to asking (and answering) yes/no questions about 1-bit issues while here we deal with a 2-bit issue. But you probably knew all that… =)
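A minimal enumeration of that 2-bit state space (the encoding is mine: first bit = “was beating”, second bit = “is beating now”):

```python
# All four possible (was_beating, is_beating_now) states
states = [(was, now) for was in (0, 1) for now in (0, 1)]

# "Have you stopped beating your wife?" asks: is the state exactly (1, 0)?
yes_states = [s for s in states if s == (1, 0)]
no_states = [s for s in states if s != (1, 0)]

# A "no" answer leaves three states open, including (0, 1):
# the questionee has only recently started.
print(no_states)  # -> [(0, 0), (0, 1), (1, 1)]
```

The yes/no framing pretends the answer space is {(1,0), (1,1)}, silently assuming the first bit is 1.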
Just a small one, because I can’t hold it: You can’t judge the usefulness of a definition without specifying what you want it to be useful for. And now I’m off to bed… =)
botogol:
Eliezer (and Robin), this series is very interesting and all, but… aren’t you writing this on the wrong blog?
I have the impression Eliezer writes blog entries in much the same way I read Wikipedia: Slowly working from A to B in a grandiose excess of detours… =)
I must say I found this rather convincing (but I might just be confirmation biased). Also, I have a question on the topic: The zombiists assume that the universe U of existing things is split into two exclusive parts, physical things P and epiphenomenal things E. The physical things P presumably evolve as something like P(t+1)=f(P(t),noise), as we have defined that E does not influence P. But how does E evolve? Is it E(t+1)=f(P(t)[,noise]), or is it E(t+1)=f(P(t),E(t)[,noise])? I have somehow always assumed the first, but I do not remember having read it spelled out so unmistakably.
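To make the distinction concrete, a toy sketch (all update functions here are arbitrary stand-ins of my own invention, not anything the zombiists specify). E never feeds back into P either way; the question is only whether E’s own past matters to E:

```python
def f_P(p):
    # Physical update rule (toy map, noise omitted)
    return (3 * p + 1) % 7

def f_E_option1(p):
    # Option 1: E(t+1) = f(P(t)) -- E is a pure readout of P, no state of its own
    return p * 2

def f_E_option2(p, e):
    # Option 2: E(t+1) = f(P(t), E(t)) -- E carries its own memory
    return p * 2 + e % 3

p, e1, e2 = 1, 0, 0
for _ in range(5):
    e1 = f_E_option1(p)
    e2 = f_E_option2(p, e2)
    p = f_P(p)
# Under option 1, two universes with the same P-history have identical E-histories;
# under option 2, E's initial condition can make them differ forever.
```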
• Sarah is hypnotized and told to take off her shoes when a book drops on the floor. Fifteen minutes later a book drops, and Sarah quietly slips out of her loafers. “Sarah,” asks the hypnotist, “why did you take off your shoes?” “Well . . . my feet are hot and tired,” Sarah replies. “It has been a long day.”
• George has electrodes temporarily implanted in the brain region that controls his head movements. When neurosurgeon José Delgado (1973) stimulates the electrode by remote control, George always turns his head. Unaware of the remote stimulation, he offers a reasonable explanation for it: “I’m looking for my slipper.” “I heard a noise.” “I’m restless.” “I was looking under the bed.”
The point is: That’s how the brain works, always. It is only in special circumstances, like the ones described, that the fallaciousness of these “explanations from hindsight” becomes obvious.
Unknown: Well, maybe yeah, but so what? It’s just practically impossible to completely re-evaluate every belief you hold whenever someone says something that asserts the belief to be wrong. That has nothing at all to do with “overconfidence”, but everything to do with sanity. The time to re-evaluate your beliefs is when someone gives a possibly plausible argument about the belief itself, not just an assertion that it is wrong. E.g. whenever someone argues anything, and the argument is based on the assumption of a personal god, I dismiss it out of hand without thinking twice—sometimes I do not even take the time to hear them out! Why should I, when I know it’s gonna be a waste of time? Overconfidence? No, sanity!
[Warning: Here be sarcasm] No! Please let’s spend more time discussing dubious non-disprovable hypotheses! There’s only a gazillion more to go, then we’ll have convinced everyone!
I think the argument is misguided. Why? The choice is not only hypothetical but impossible. There is not the remotest possibility of a googolplex of persons ever existing.
So I’ll tone it down to a more realistic “equation”, then I’ll argue that it’s not an equation after all.
Then I’ll admit that I’m lost, but so are you… =)
Let’s assume 1e7 people experiencing pain of a certain intensity for one second vs. one person experiencing equal pain for 1e7 seconds (approx. 116 days).

Let’s assume that every person in question has an expectancy of, say, 63 years of painless life. Then my situation is equivalent to either extending the painless life expectancy of 1e7 people from 63y-1s to 63y, or extending it for one person from 63y minus those 116 days to 63y.
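A quick arithmetic check of what 1e7 seconds actually amounts to:

```python
# Convert 1e7 seconds into days and years (approximate, ignoring leap seconds)
seconds = 1e7
days = seconds / 86_400      # 86,400 seconds per day
years = days / 365.25

print(round(days, 1))   # -> 115.7
print(round(years, 3))  # -> 0.317
```

So the single sufferer loses roughly a third of a year, not decades; the diminishing-returns question below stands either way.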
According to the law of diminishing returns, the former is definitely much less valuable than the latter.
But how much so? How to quantify this?
I have no idea, but I claim that neither do you… =)

regards, frank
p.s.
I have a hunch that you couldn’t fit enough people with specks in their eyes into the universe to make up for one 50-year-torture.
[Laura ABJ:] While I think I have insight into why a lot of men might FAIL with women, that doesn’t mean I get THEM...
You are using highly loaded and sexist language. Why is it only the men who fail with the women? Canst thou not share in the failure, because thou art so obviously superior?
Wow, good teaser for sure! /me is quivering with anticipation ^_^
Apart from Occam’s Razor (multiplying entities beyond necessity) and Bayesianism (arguably low prior and no observation possible), how about the identity of indiscernibles:
Anything inconsequential is indiscernible from anything that does not exist at all; therefore inconsequential equals nonexistent.

Admittedly, zombiism is not really falsifiable… but that’s only yet another reason to be sceptical about it! There are gazillions of that kind of theory floating around in the observational vacuum. You can pick any one of those, if you want to indulge your need to believe that kind of stuff, and watch those silly rationalists try to disprove you. A great pastime for boring parties!
Also, the concept of identity is twisted beyond recognition by zombiism:
The physical me causes the existence of something outside of the physical me, which I define to be the single most important part of me. Huh?

Btw, anyone to answer my question further above?
I asked: Can epiphenomenal things cause nothing at all, or can they (as physical things can) cause other epiphenomenal things?
Maybe Richard, as our expert zombiist, might want to relieve me of my ignorance?
Caledonian: Sure you do. That’s why we have biology and chemistry and neuroscience instead of having only one field: physics.
That’s just a matter of efficiency (as I have tried to illuminate). There is nothing about those high level descriptions that is not compatible with physics. They are often more convenient and practical, but they do not add one iota of explanatory power.
PK: I don’t see the ++ in your nice example, it’s perfectly valid C… =)
Caledonian, Ian C.: I know of no models of reality that have greater explanatory power than the standard reductionist one-level-to-bind-them-all position (apologies for the pun). So why add more? In a certain way “our maps [are] part of reality too”, but not in any fundamental sense. To simulate a microchip doing an FFT, it’s quite sufficient to simulate the physical processes in its logic gates. You need not even know what the chip is actually supposed to do. You just need a very precise description of the chip. If you do know what it’s doing, it’s of course much more efficient to directly use the same algorithm the chip is using. That will also dramatically cut down on the length of its description. But that does not make the FFT algorithm fundamental in any way. It is just a way to look at what is happening. I mean, really, this shouldn’t be so hard to grasp...
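A toy version of that point, with a half-adder standing in for the FFT chip (circuit and names are my own, not from the post): simulate the gates one by one, knowing nothing about “addition”, and compare against the high-level description. Same outputs; the algorithm is just a shorter description, not an extra ingredient.

```python
def nand(a, b):
    # The only "physics" we simulate: a single universal gate
    return 1 - (a & b)

def half_adder_gates(a, b):
    # Gate-level simulation of a standard NAND-only half-adder
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # XOR built from NANDs -> sum bit
    c = nand(n1, n1)                    # AND built from NANDs -> carry bit
    return s, c

def half_adder_algorithm(a, b):
    # High-level description: "it adds two bits"
    total = a + b
    return total % 2, total // 2

# The gate simulation reproduces the algorithm exactly, without "knowing" it
for a in (0, 1):
    for b in (0, 1):
        assert half_adder_gates(a, b) == half_adder_algorithm(a, b)
```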
Frank Hirsch: How do you propose to lend credibility to your central tenet “If you seem to have free will, then you have free will”?
Ian C.: I’m not deducing (potentially wrongly) from some internal observation that I have free will. The knowledge that I chose is not a conclusion, it is a memory. If you introspect on yourself making a decision, the process is not (as you would expect): consideration (of pros and cons) → decision → option selected. It is in fact: consideration → ‘will’ yourself to decide → knowledge of option chosen + memory of having chosen it. The knowledge that you chose is not worked out, it is just given to you directly. So there is no scope for you to err.
No scope to err? Surely you know that human memory is just about the least reliable source of information you can appeal to? Much of what you seem to remember about your decision process is constructed in hindsight to explain your choice to yourself. There is a nice anecdote about what happens if you take that hindsight away:
In an experiment, psychologist Michael Gazzaniga flashed pictures to the left half of the field of vision of split-brain patients. Being shown the picture of a nude woman, one patient smiles sheepishly. Asked why, she invents — and apparently believes — a plausible explanation: “Oh — that funny machine”. Another split-brain patient has the word “smile” flashed to his nonverbal right hemisphere. He obliges and forces a smile. Asked why, he explains, “This experiment is very funny”.
So much for evidence from introspective memory...
Nominull: I believe Eliezer would rather be called Eliezer...
Ian C.: We observe a lack of predictability at the quantum level. Do quarks have free will? (Yup, a shameless rip-off of Doug’s argument, tee-hee! =) Btw, I don’t think you can name any observations that strongly indicate (much less prove, which is essentially impossible anyway) that people have any kind of “free will” that contradicts causality-plus-randomness at the physical level.
Oh, and the Liar Paradox makes much more sense once we overcome our obsession with recursion: If we take the equally valid stance of viewing it as an iteration, it is easy to see that the whole problem is that the proposition does not converge; that’s all there is to it.
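Spelled out in a few lines: treat “This sentence is false” as a map from a candidate truth value to its negation, and iterate.

```python
def liar_step(truth_value):
    # "This sentence is false": the sentence's truth is the negation
    # of whatever truth value we just assigned it
    return not truth_value

trajectory = []
v = True  # start from either candidate; the result is the same oscillation
for _ in range(6):
    trajectory.append(v)
    v = liar_step(v)

print(trajectory)  # -> [True, False, True, False, True, False]
```

No fixed point, no convergence; the iteration just oscillates forever.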
If your mind contains the causal model that has “Determinism” as the cause of both the “Past” and the “Future”, then you will start saying things like, “But it was determined before the dawn of time that the water would spill—so not dropping the glass would have made no difference”.
Nobody could be that screwed up! Not dropping the glass would not have been an option. =)
About all that free-will stuff: The whole “free will” hypothesis may be so deeply rooted in our heads because the explanatory framework of identifying agents with beliefs about the world, objectives, and the “will” to change the world according to these beliefs and objectives just works so remarkably well. Much like Newton’s theory of gravity: In terms of the ratio of predictive_accuracy_in_standard_situations to operational_complexity, Newton’s gravity kicks donkey. So does the Free Will (TM). But that don’t mean it’s true.
[Eliezer says:] And if you’re planning to play the lottery, don’t think you might win this time. A vanishingly small fraction of you wins, every time.
I think this is, strictly speaking, not true. A more extreme example: A friend recently asserted to me that “In one of the future worlds, I might jump up in a minute and run out onto the street, screaming loudly!” I said: “Yes, maybe, but only if you are already strongly predisposed to do so. MWI means that every possible future exists, not every arbitrary imaginable future.” Although your assertion about the lottery is much weaker, I don’t believe it’s strictly true either.
Okay, now let’s code those factory objects!
• 1 bit for blue not red
• 1 bit for egg not cube
• 1 bit for furred not smooth
• 1 bit for flexible not hard
• 1 bit for opaque not translucent
• 1 bit for glows not dark
• 1 bit for vanadium not palladium
Nearly all objects we encounter code either 1111111 or 0000000. So we compress all objects into two categories and define: 1 bit for blegg (1111111) not rube (0000000). But, alas, the compression is not lossless, because there are objects which are neither perfect bleggs nor rubes: A 1111110 object will be innocently accused of containing vanadium, because it is guilty by association with the bleggs, subjected to unfair kin liability! Still, in an environment where our survival depends on how faithfully we can predict unobserved features of those objects, we stand good chances:
Nature: “I have here an x1x1x1x object, what is at its core?” We suspect a blegg and guess vanadium—and with 98% probability we are right, and nature awards us a pizza and beer.
Now the evil supervillain, I-can-define-any-way-I-like-man (Icdawil-man, for short), comes by and says: “I will define my categories thus: 1 bit for regg (0101010) not blube (1010101).” While he will achieve the same compression ratio, he loses about half of the information in the process. He has failed to carve at the joint. So much the worse for Icdawil-man.
Nature: “I have here an x1x1x1x object, what is at its core?” Icdawil-man suspects a regg, guesses palladium, and with 98% probability starts coughing blood...
Next along comes the virtuous and humble I-refuse-to-compress-man:
Nature: “I have here an x1x1x1x object, what is at its core?” Irtc-man refuses to speculate and is awarded a speck in his eye.
Next along comes the brainy I-have-all-probabilities-stored-here-because-I-can-man:
Nature: “I have here an x1x1x1x object, what is at its core?” Ihapshbic-man also gets a pizza and beer, but will sooner be hungry again than we will. That’s because of all the energy he needs for his humongous brain, which comes in an extra handcart.
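The contest above can be sketched in a few lines (encodings as defined above; the function bodies are my own toy reconstruction). Objects are 7-bit tuples with None for the unobserved ‘x’ positions; bit 7 is the vanadium bit:

```python
def our_guess(observed):
    # Compress toward the nearest prototype: mostly 1s -> blegg -> vanadium,
    # mostly 0s -> rube -> palladium
    known = [b for b in observed if b is not None]
    return "vanadium" if sum(known) > len(known) / 2 else "palladium"

def icdawil_guess(observed):
    # Regg prototype 0101010 has 1s exactly on the odd positions, so an
    # x1x1x1x observation looks like a perfect regg to him; reggs end in
    # bit 0, so he predicts palladium (and coughs blood 98% of the time)
    odd_bits = [observed[i] for i in (1, 3, 5)]
    return "palladium" if all(b == 1 for b in odd_bits) else "vanadium"

obj = (None, 1, None, 1, None, 1, None)  # nature's x1x1x1x object
print(our_guess(obj), icdawil_guess(obj))
```

Both carvings yield two buckets (same compression ratio), but Icdawil-man’s carving throws away the correlation between the bits—which is exactly where the 98% predictive power lives.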
Any more contenders? =)