Just for the sake of devil’s advocacy:
4) You want to attribute good things to your ethics, and thus find a way to interpret events that enables you to do so.
Also, I think I would prefer blowing up the nova instead. The Babyeaters’ children’s suffering is unfortunate, no doubt, but hey, I spend money on ice cream instead of saving starving children in Africa. The Superhappies’ degradation of their own, more important, civilization is another consideration.
(you may correctly protest about the ineffectiveness of aid—but would you really avoid ice cream to spend on aid, if it were effective and somehow they weren’t saved already?)
Actually, failures of the LHC should never have any effect at all on our estimate of the probability that, if it did not fail, it would destroy Earth.
This is because the ex ante probability of failure of the LHC is independent of whether or not it would destroy Earth if it were turned on. A simple application of Bayes’ rule.
Now, the reason you come to a wrong conclusion is not because you wrongly applied the anthropic principle, but because you failed to apply it (or applied it selectively). You realized that the probability of failure given survival is higher under the hypothesis that the LHC would destroy the Earth if it did not fail, but you didn’t take into account the fact that the probability of survival is itself lower under that hypothesis (i.e. the anthropic principle).
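A minimal worked version of this, in my own notation (not from the original discussion): let W = “the LHC would destroy Earth if run”, F = “the LHC fails”, S = “we survive”, and assume F is a priori independent of W, that survival is certain if the LHC fails, and that survival is impossible if the LHC runs while W holds. Then

$$P(W \mid S) = \frac{P(W)\,P(F)}{P(W)\,P(F) + P(\neg W)}, \qquad P(F \mid S) = \frac{P(F)}{P(W)\,P(F) + P(\neg W)},$$

and since $P(F \mid W \wedge S) = 1$,

$$P(W \mid F \wedge S) = \frac{P(F \mid W \wedge S)\,P(W \mid S)}{P(F \mid S)} = \frac{P(W)\,P(F)}{P(F)} = P(W).$$

The anthropic discount on $P(W \mid S)$ exactly cancels the apparent evidence from the failure.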
I haven’t yet devised a way to express my appreciation of the orderliness of the universe, which doesn’t involve counting people in orderly states as compared to disorderly states.
What do you mean by that?
Frankly, I’m not sure what it is that you’re complaining about. Even in ordinary life humans have number ambiguity: if you sever the connection between the two halves of the brain, you get what seem to be two minds, but why should this be some great problem?
But unfortunately there’s that whole thing with the squared modulus of the complex amplitude giving the apparent “probability” of “finding ourselves in a particular blob”.
I hope you will at least acknowledge the existence of the Wallace/Saunders/Deutsch point of view, on which the Born rule can be derived, without being assumed, from quantum mechanics plus only very reasonable outside assumptions, even if you won’t agree with it.
John Maxwell:
No, they are simply implementing the original plan by force.
When I originally read part 5, I jumped to the same conclusion you did, based presumably on my prior expectations of what a reasonable being would do. But then I read nyu2’s comment which assumed the opposite and went back to look at what the text actually said, and it seemed to support that interpretation.
Akon claims this is a “true” prisoner’s dilemma situation, and then tries to add more values to one side of the scale. If he adds enough values to make cooperation higher value than defecting, then he was wrong to say it was a true prisoner’s dilemma. But the story has made it clear that the aliens appear to be not smart enough to accurately anticipate human behaviour (or vice versa for that matter), so this is not a situation where it is rational to cooperate in a true prisoner’s dilemma. If it really is a true prisoner’s dilemma, they should just defect.
Of course, there may be a more humane approach than extermination or requiring them to live under human law: forcible modification to remove the desire to eat babies, and reduce the amount of reproduction. It might be a little tricky to do this without completely messing up the aliens’ psychology.
Also, it seems a little unlikely that a third ship would arrive given that the arrival of even one alien ship was considered so surprising in the first installment.
Gordon, humans respond in kind to hatred because we are programmed to by evolution, not because it is a universal response of all “ghosts”. But of course an AI won’t have compassion etc. either unless programmed to do so.
The last student should use a functional language. He’s right: the computer could easily be programmed to handle any order (as long as the IO is in the right sequence and each variable is assigned only one value), so it’s reasonable for him to expect that it would be.
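A minimal Haskell sketch of the point (my own example, not from the original thread): with single assignment, the order of top-level definitions is irrelevant; only the IO actions in main have a fixed sequence.

```haskell
-- Top-level bindings may appear in any order: each name is bound exactly
-- once, so the compiler resolves the dependency order itself.
main :: IO ()
main = print z   -- prints 5; only IO actions have a fixed sequence

z :: Int
z = x + y        -- uses x and y, which are defined below

x, y :: Int
x = 2
y = 3
```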
Michael: Eliezer has at least in the past supported coherent extrapolated volition. I don’t know if this is up-to-date with his current views.
I guess I was too quick to assume that mangled worlds involved some additional process. Oops.
Miguel: it doesn’t seem to be a reference to anything; it’s just a word for some experience an alien might have that is incomprehensible to us humans, just as humour would be incomprehensible to the alien.
Nick and Psy-Kosh: here’s a thought on Boltzmann brains.
Let’s suppose the universe has vast spaces uninhabited by anything except Boltzmann brains, which briefly form and then disappear, and that any given state of mind has vastly more instantiations in the Boltzmann-brain-only spaces than in regular civilizations such as ours.
Does it then follow that one should believe one is a Boltzmann brain? In the short run perhaps, but in the long run you’d be more accurate if you simply committed to not believing it. After all, if you are a Boltzmann brain, that commitment will cease to be relevant soon enough as you disintegrate, but if you are not, the commitment will guide you well for a potentially long time.
To clarify: I mean that failures should not move the probability away from the prior; of course they do result in a different probability estimate than if the LHC had succeeded and we had survived.
MIND IS FUNDAMENTAL AFTER ALL! CONSCIOUS AWARENESS DETERMINES OUR EXPERIMENTAL RESULTS!
You can still read this kind of stuff. In physics textbooks.
I hope this is just a strawman of the Copenhagen interpretation. If not, what textbooks are you reading?
Questions like “How are amplitudes converted to subjective probabilities?” are not automatically dictated by the theory
You might find this paper by David Deutsch interesting, although equation 14 bugs me: it seems to me that |Psi_2> as defined doesn’t necessarily exist.
To say that the concepts of true and false do not apply to moral statements is not the same thing as saying that ethics is meaningless. For one thing, one can be committed personally to a particular ethical view without necessarily believing that there is any objective criterion by which it is superior to others. Also, ethics serves a real-world purpose in co-ordinating the behaviour of agents with different goals; one can judge the efficacy of moral systems in fulfilling this purpose without necessarily either approving of it or making the mistake of confusing utility with truth.
Putting these together (actually the first is sufficient, but I threw the second one in anyway), one can both be in favor of some set of real-world consequences and judge moral systems on how well they promote those consequences (i.e. be a consequentialist) without making the mistake of attributing objective truth (or whatever) to the moral systems you therefore favor. There is thus no contradiction in being a consequentialist and denying the existence of any objective morality.
It’s evidence of my values, which are evidence of typical human values. Also, I invite other people to really consider whether they are so different.
Eliezer tries to derive his morality from human values, rather than simply assuming that it is an objective morality, or asserting it as an arbitrary personal choice. It can therefore be undermined in principle by evidence of actual human values.
Strong enough to disrupt personal identity, if taken in one shot? That’s a difficult question to answer, especially since I don’t know what experiment to perform to test any hypotheses. On one hand, billions of neurons in my visual cortex undergo massive changes of activation every time my eyes squeeze shut when I sneeze—the raw number of flipped bits is not the key thing in personal identity. But we are already talking about serious changes of information, on the order of going to sleep, dreaming, forgetting your dreams, and waking up the next morning as though it were the next moment.
It sounds as if you believe in a soul (or equivalent) that is “different” for some set of possible changes and “the same” for others. I would suggest that whether an entity at time n+1 is the same person as you at time n is not an objective fact about the universe. Humans have evolved so that we consider the mind that wakes up in the body of the mind that went to sleep to be the same person, but this intuitive sense is not an intuitive understanding of an objective reality; one could modify oneself to consider sleep to disrupt identity, and this would not be a “wrong” belief, just a different one.
I think most people are most comfortable retaining their evolution-given intuitions where they are strong, but where they are weak I think it is a mistake to try to overgeneralize them; instead one should try to shape them consciously. If you want to try being female for a while, why spoil your fun with hang-ups about identity? Just decide that it’s still you.
I think what he means by “calibrated” is something like this: it should not be possible for someone else to systematically improve the probabilities you give for the possible answers to a question just from knowing what values you’ve assigned (and knowing your biases), without looking at what the question is.
I suppose the improvement would indeed be measured in terms of relative entropy of the “correct” guess with respect to the guess given.
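To spell that out (my formalization, not from the original comment): if p is the improved, “correct” distribution over answers and q is the one actually given, the improvement could be measured as the relative entropy

$$D_{\mathrm{KL}}(p \parallel q) = \sum_i p_i \log \frac{p_i}{q_i},$$

which is zero exactly when q = p, i.e. when no systematic improvement is possible.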
You have another inconsistency as well. As you should have noticed in the “How many” thread, the assumptions that lead you to believe that failures of the LHC are evidence that it would destroy Earth are the same ones that lead you to believe that annihilational threats are irrelevant (after all, if P(W|S) = P(W), then Bayes’ rule leads to P(S|W) = P(S)).
Thus, given that you believe that failures are evidence of the LHC being dangerous, you shouldn’t care. Unless you’ve changed to a new set of incorrect assumptions, of course.
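Spelling out that parenthetical, with the same symbols as in it:

$$P(S \mid W) = \frac{P(W \mid S)\,P(S)}{P(W)} = \frac{P(W)\,P(S)}{P(W)} = P(S).$$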
It might make an awesome movie, but if it were expected behaviour, it would defeat the point of the injunction. In fact, if rationalists were expected to look for workarounds of any kind, it would defeat the point of the injunction. So the injunction would have to be not merely to be silent, but not to attempt to use the knowledge divulged to thwart the one making the confession in any way except by non-coercive persuasion.
Or alternatively, never to act in a way such that, had the person making the confession expected it, they would have avoided confessing.