I don’t believe in the existence of morals, which is to say there is no “right” or “wrong” in the universe. However, I still take actions that most people would rate “moral”. The reasons I do this are found in my brain architecture, and they are not simple. Also, I don’t care about utilitarianism. One could probably find some extremely complex utility function that describes my actions, which would make everybody on Earth a utilitarian, but I don’t consciously make utility calculations. On the other hand, if morality is defined as “the way people make decisions”, then of course everybody is moral and morality exists.
I believe that “nothing is right or wrong”, but that doesn’t affect my choices much. There is nothing inconsistent about that.
Roko, morals are in the end arbitrary, and there is no “correct” moral code for the AI to choose. The AI can be programmed to generalize a moral code from all humans though.
You can have real X-Men; check out the Discovery special about “real superhumans”. There was one man who could withstand cold so well that the doctors thought it should not be possible. A single mutation sometimes does create significant changes (and, in this case, advantages).
If you believe that p-zombies are logically impossible, you’re claiming that when one runs an atom-by-atom simulation, and those atoms happen to form a human brain, a pathway to the consciousness-stuff opens up. And not only that, but that consciousness-stuff has a precise, causal effect on your atom simulation. And not only that, but the effect changes the thought process using exactly the protocol that evolution happened to choose! That is a pretty remarkable claim to me.
“3. Intuitively, it sure seems like my inward awareness is causing my internal narrative to say certain things.”
Intuitively, maybe, but under epiphenomenalism you only have conscious experience of the ‘inward awareness’; in reality a physical process creates the experience, so the experience does not cause anything.
“4. The word “consciousness”, if it has any meaning at all, refers to that-which-is or that-which-causes or that-which-makes-me-think-I-have inward awareness.”
You’re not using the correct definition for the zombie argument, so your point is invalid. In this context, consciousness means the sum of sensory experience.
“In worlds where it is impossible to measure a difference in principle, it shouldn’t have any impact on what’s the correct action to take, for any sane utility function.”
Wrong, since it may be possible to estimate the probability of being in a p-zombie world, or, more generally, the probability that such a difference exists.
“However, this will necessarily mean that they’re shown to refer to things that are actually measurable.”
Things that cannot be measured can still be very important, especially with regard to ethics. One may claim, for example, that it is OK to torture philosophical zombies, since after all they aren’t “really” experiencing any pain. If it could be shown that I’m the only conscious person in this world and everyone else is a p-zombie, then I could morally kill and torture people for my own pleasure.
“Actually, currently my brain isn’t particularly interested in the concepts some people call “qualia”; it certainly doesn’t assume it has them. If you got the idea that it did because of discussions it participated in in the past, please update your cache: This doesn’t hold for my present-brain.”
Does your brain assume/think it creates sensory experiences (or what people often call consciousness)?
“We’ve already found the flaw.”
What exactly is the logical flaw you’ve found? The zombie argument implies, among other things, that there can be no test that will tell whether a person is really conscious or just a zombie. You might “know” that you’re conscious yourself, but there can be no rational argument that proves this.
“What real reasons? I don’t see any.”
If Zombie Worlds are possible, we might be living in one, and therefore there can be no argument that proves otherwise. Your brain assumes that you have qualia; I make no such assumption.
If a theory does not make you ‘less confused’, that doesn’t mean the theory is wrong or bad. It could just be the way the world really functions: some things are truly unknowable. Consciousness might be one of those things that will never be solved (yes, I know that a statement like this is dangerous, but this time there are real reasons to believe it). Of course, it is always a good thing to try to find flaws in the theory.
“Separately, there exists a not-yet-understood reason within normal physics why philosophers talk about consciousness and invent theories of dual properties.”. Minds also suffer the delusion of ‘free will’, so I don’t see that argument as a major one.
“But based on my limited experience, the Zombie Argument may be a candidate for the most deranged idea in all of philosophy.” It is irrational to reject an argument merely because it seems absurd. It is, however, a good reason to study the argument for flaws.
Your first argument is that zombie worlds might not actually be logically possible. Fine, that is a possibility, but if you accept both that minds can work by computation and that zombie worlds are impossible, it would mean that certain algorithms cannot logically exist without some kind of consciousness popping into existence.
Ironically, in the future we might learn that the first replicator required “chance” of the same order of magnitude.
90% of drivers can be better than the average.
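A minimal numeric sketch of that point (the skill values here are invented purely for illustration): with a sufficiently skewed distribution, 90% of drivers really can be better than the average, if “average” means the mean.

```python
# Nine decent drivers and one very bad one: the single outlier
# drags the mean down below nine of the ten drivers.
skills = [10] * 9 + [-100]

mean = sum(skills) / len(skills)  # (90 - 100) / 10 = -1.0
above = sum(1 for s in skills if s > mean)

print(mean)                  # -1.0
print(above / len(skills))   # 0.9, i.e. 90% are above the mean
```

The same can never happen with the median, which is why the claim sounds paradoxical: people tend to hear “average” as “typical”.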
Tom, I did not confuse ends and means.
The problem with Communism is timing. In the future (if we survive the next century) there will be enough technological progress to create what is essentially Communism (no one needs to work, everyone will have the resources needed to live incredible lives, and so forth). Of course, we won’t call it Communism.
Computing power is also increasing exponentially.
Eliezer, are you unaware of the fact that biological evolution is only a subset of evolution in general?
Even if biological evolution allowed a one-generation mutation for “Storm abilities”, that would not mean that evolution explains nothing. Even if everything “is possible” under evolution, there can still be different probabilities for different outcomes. The probability of a “Storm ability mutation” is non-zero.