Eliezer—thanks for putting this together. I had a lot of fun; it made me nostalgic for my college days, when I would talk about things that actually kept my brain awake. Thanks also for the AI textbook suggestion; I have placed an order for it and will give MIT’s (free) online undergrad class a shot.
LazyDave
When the movie Bob Roberts came out, I was pretty conservative in my politics, but I still found the movie incredibly funny. This is a testament to how good the movie was; my enjoyment/agreement ratio was quite high.
On a related note, I think that is why I have so much appreciation for this blog; I have never found a site that I disagree with so much yet still sincerely enjoy. Again, my enjoyment/agreement ratio is through the roof.
Not quite the same, but you may enjoy the following from The Onion, where April Fools Day is a year-round event: “Buoyant Force On Area Object Equal To Weight Of Water Displaced”
The book Eliezer suggested was Artificial Intelligence: A Modern Approach. The course I am going to take is offered here and here. The list of all courses is here. The course I am looking at is an undergrad one; I figure that will give me a good idea of where I want to go with AI, whether that be pursuing my Master’s or some other route....
Caledonian—the problem is, while we cannot show that consciousness exists in anything besides ourselves, we KNOW it at least exists inside ourselves. We know it more than we know that the earth exists, or that there are physical laws, etc. But when it comes to entities other than ourselves, it may as well be phlogiston; we can make ZERO predictions that would confirm or deny its existence. This is what makes it qualitatively different from any other phenomenon out there.
I think this is the reason that some rationalists seem to find consciousness so disturbing; objective consequences are THE way to determine if something “exists,” except in the case of consciousness, and in that special case, the probability of it actually existing, at least for one person (namely, me), is 1.
I haven’t read Chalmers’s book, so I am just going by what I read here, but at the beginning of the post you promise to show that the zombie world is logically impossible, yet you never deliver; you show that it is improbable enough to perhaps be considered practically impossible, but since we are just dealing with a “thought experiment,” that is irrelevant. For example, I do not think that everyone around me is a zombie. In fact, I’d bet all the money I have that they aren’t. But I still don’t KNOW they aren’t, the way I KNOW that I am not.
On another note, I’m surprised at some of the ad hominem-type statements on this thread (people that don’t agree with me are like creationists, people that don’t agree with me just don’t want to see the truth). On most blogs, it’s expected, but it is interesting to see it here.
Cool! I am REALLY looking forward to this. Even if I don’t end up grasping QM after this series, at least you are taking an honest shot at it. I can’t stand it when I try to ask someone (that allegedly knows this stuff) about QM and they come back with, “it is so strange you can’t even try to understand it, but here are the results of various QM experiments”.
So I guess I get how this works in theory, but in practice, doesn’t a particle going from A-B have SOME kind of effect that is different than if it went from B-C, even without the sensitive thingy? I don’t know if it would be from bouncing off other particles on the way, or having some kind of minute gravitational effect on the rest of the universe, or what. And if that is the case, shouldn’t the experiments always behave as if there WERE that sensitive thingy there? Or is it really possible to set it up so there is literally NO difference in all the particle positions in the universe no matter which path is taken?
(This is a repost of a comment I made a few days ago under the topic “Distinct Configurations”, but if someone could address this, I would really appreciate it.)
So I guess I get how [configurations being the same as long as all the particles end up in the same place] works in theory, but in practice, doesn’t a particle going from A-B have SOME kind of effect that is different than if it went from B-C, even without the sensitive thingy? I don’t know if it would be from bouncing off other particles on the way, or having some kind of minute gravitational effect on the rest of the universe, or what. And if that is the case, shouldn’t the experiments always behave as if there WERE that sensitive thingy there? Or is it really possible to set it up so there is literally NO difference in all the particle positions in the universe no matter which path is taken?
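To make the question concrete, here is the amplitude arithmetic as I understand it (a rough sketch in my own notation, not anything from the post; $\psi_B$ and $\psi_C$ are the amplitudes for the two paths, and $|E_B\rangle$, $|E_C\rangle$ are whatever states the rest of the universe ends up in after each path):

$$P = |\psi_B + \psi_C|^2 = |\psi_B|^2 + |\psi_C|^2 + 2\,\mathrm{Re}(\psi_B^* \psi_C)$$

if the final configurations really are identical, versus

$$P = |\psi_B|^2 + |\psi_C|^2 + 2\,\mathrm{Re}\big(\psi_B^* \psi_C \langle E_B | E_C \rangle\big)$$

if each path disturbs the environment differently, so the interference term gets scaled by the overlap $\langle E_B | E_C \rangle$. My question then amounts to: can that overlap really be made exactly 1 in practice, or is it always at least slightly less than 1?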
Nick—thanks for the link. I admit I tend to glaze over the comments as many of them are frankly over my head. I re-read yours and it makes more sense to me.
What Roland’s PS said :)
Eli—As you said in an earlier post, it is not the testability part of MWI that poses a problem for most people with a scientific viewpoint; it is the fact that MWI came after Collapse. So the core part of the scientific method—testability/falsifiability—gives no more weight to Collapse than to MWI.
As to the “Bayesian vs. Science” question (which is really a “Metaphysics vs. Science” question), I’ll go with Science every time. The scientific method has trounced logical argument time and time again.
Even if there turn out to be cases where the “logical” answer to a problem is correct, who cares if it does not make any predictions? If it is not testable, then it also follows that you can’t do anything useful with it, like cure cancer, or make better heroin.
Caledonian—not sure if this is what was originally alluded to, but the Prisoner’s Dilemma / Tragedy of the Commons scenario is one where agents acting in their best interest get screwed. Of course, that is why we have governments in the first place (i.e. to get around those problems).
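To spell out what I mean by “get screwed,” here is the payoff structure in code (the numbers are the usual textbook values, T=5 > R=3 > P=1 > S=0, nothing specific to this thread):

```python
# Classic Prisoner's Dilemma payoffs as (my payoff, their payoff).
# Standard textbook numbers: T=5 > R=3 > P=1 > S=0.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Whatever the other player does, defecting pays me strictly more...
for theirs in ("cooperate", "defect"):
    if_i_cooperate = PAYOFFS[("cooperate", theirs)][0]
    if_i_defect = PAYOFFS[("defect", theirs)][0]
    print(f"They {theirs}: I get {if_i_cooperate} cooperating, {if_i_defect} defecting")

# ...so two "rational" agents both defect and land on (1, 1),
# even though mutual cooperation (3, 3) is better for both of them.
```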
M—How do you figure Somalia is libertarian? Libertarianism requires a stable government (i.e. a monopoly on force), which Somalia definitely does not have.
H.A.—I don’t think the point was that Libertarians are more scientific than others, but that Libertarianism and Science are similar in the sense that they put more faith in processes than in people.
While we are (sort of) on the topic of cryonics, who here is signed up for it? For those that are, what organization are you with, and are you going with the full-body plan, or just the brain? I’m considering Alcor’s neuropreservation process.
“ME”—I’ve noticed that people on this forum seem to label ANYTHING that has to do with conditional probability “Bayesian”. I’m not quite sure why this is; I have a hard enough time figuring out the real difference between a “frequentist” and a “Bayesian”, but reading some of these posts I get the feeling that “Bayesian” around here means “someone who knows basic logic”.
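For what it’s worth, the formula everyone points at is just conditional probability rearranged:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

As far as I can tell, the “Bayesian” part is not the formula itself but the willingness to assign a prior $P(H)$ to a hypothesis at all, whereas a frequentist only assigns probabilities to outcomes of repeatable experiments. If that is right, then “knows basic logic” is not the real distinction.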
I always thought that the justification for not revealing the transcripts in the AI box experiment was pretty weak. As it is, I can claim that whatever method Eliezer used must have been effective only for people more simple-minded than me; ignorance of the specifics of the method does not make it harder to make that claim. In fact, it makes it easier, as I can imagine Eli just said “pretty please” or whatever. In any event, the important point of the AI box exercise is that someone reasonably competent could be convinced to let the AI out, even if I couldn’t be convinced.
One thing I would have liked to know is whether the subjects had a different opinion about the problem once they let the AI out. One would assume they did, but since all they said was “I let Eliezer out of the box,” it is somewhat hard to tell.
If the reason for keeping it private is that he plans to do the trick with more people (and it doesn’t work if you know the method in advance), then it makes sense. But otherwise, I don’t see much of a difference between somebody thinking “there is no argument that would convince me to let him out” and “argument X would not convince me to let him out.” In fact, the latter is more plausible anyway.
In any event, I am the type of guy who always tries to find out how a magic trick is done and then is always disappointed when he finds out. So I’m probably better off not knowing :)
I have been seriously considering cryonics; if the MWI is correct, I figure that even if there is a vanishingly small chance of it working, “I” will still wake up in one of the worlds where it does work. Then again, even if I do not sign up, there are plenty of worlds out there where I do. So signing up is less an attempt to live forever than an attempt to line up my current existence with the memory of the person who is revived, if that makes any sense. To put it another way, if there is a world where I procrastinate signing up until right before I die, the person who is revived will have 99.9% of the same memories as the version of me who did not sign up at all, so if I don’t end up signing up I do not lose much.
FWIW, I sent an email to Alcor a while ago that was never responded to, which makes me wonder if they have their act together enough to preserve me for the long haul.
On a related note, is there much agreement on what is “possible” as far as MWI goes? For example, in a classical universe, if I know the position/momentum of every particle, I can predict the outcome of a coin flip with probability 1.0. If we throw quantum events into the mix, how much does this change? I figure the answer should be somewhere between (0.5 + tiny number) and (1.0 - tiny number).
Ben—remember that the original article referenced in point #32 stated that it was useful to have a word for something with traits A and B if (A correlates with B) OR (the combination of A and B correlates with something else, C). So even though green eyes do not positively correlate with dark hair, the combination does correlate with your desire.
I know this is basically repeating what others have already said, but I just wanted to stress that A and B do not have to correlate.
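Here is a made-up numerical illustration of that point (the data is simulated and the 0.9 is arbitrary; none of this comes from the original article):

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility
N = 100_000

# A ("green eyes") and B ("dark hair") are independent coin flips,
# so corr(A, B) should be ~0.
A = [random.random() < 0.5 for _ in range(N)]
B = [random.random() < 0.5 for _ in range(N)]

# But C (the thing the word is useful for predicting) fires mostly
# when A AND B hold together.
C = [a and b and random.random() < 0.9 for a, b in zip(A, B)]

def corr(xs, ys):
    """Pearson correlation for lists of 0/1 values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

AB = [a and b for a, b in zip(A, B)]
print(corr(A, B))   # ~0.00: the traits themselves don't correlate
print(corr(AB, C))  # ~0.93: but the conjunction predicts C strongly
```

So a word for the A-and-B cluster can earn its keep even though neither trait tells you anything about the other.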