Actually, the last statement (about spankings instead of jails) doesn’t sound foolish at all. We abolished torture and slavery, we have replaced many punishments with softer ones, we try to make executions painless and more and more people oppose the death penalty, we are increasingly concerned about the well-being of ever larger groups (white men, then women, then other “races”, then children), we pay attention to personal freedom, we think inmates are entitled to human rights, and if we care more about preventing further misdeeds than about punishing the culprit, jail may not be efficient. I doubt spanking will replace jail, but I’d bet on something along these lines.
Manon_de_Gaillande
Maybe the reason we tend to choose bet 2 over bet 1 (before computing the actual expected winnings) is not the higher probability of winning, but the smaller sum we can lose (either the expected loss or the worst-case loss, I’m not sure which). So the bias here could be something more along the lines of status quo bias or the endowment effect than a need for certainty.
I can only speak for myself, but I do not intuitively value certainty/high probability of winning, while I am biased towards avoiding losses.
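To illustrate the confound (with invented numbers, not the ones from the original post), a bet can have both the higher probability of winning and the smaller possible loss while still having the lower expected value, so an intuitive preference for it doesn’t tell you which of the two features is driving the choice:

```python
# Hypothetical bets (all numbers invented for illustration):
# Bet 1: 90% chance to win $10,000, 10% chance to lose $5,000.
# Bet 2: 95% chance to win  $8,000,  5% chance to lose $1,000.
# Bet 2 has both the higher win probability AND the smaller possible
# loss, so preferring it intuitively can't distinguish the two biases.

def expected_value(p_win: float, win: float, loss: float) -> float:
    return p_win * win - (1 - p_win) * loss

print(expected_value(0.90, 10_000, 5_000))  # ~8500 -- Bet 1 wins on expected value
print(expected_value(0.95, 8_000, 1_000))   # ~7550
```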
I think I’ve found one of the factors (besides scope insensitivity) involved in the intuitive choice: in real life, a small amount of harm inflicted n times on one person has negative side-effects which don’t happen when you inflict it once on each of n persons. Even though there aren’t any in this thought experiment, we are so used to them that we probably take them into account (at least I did).
You lost me there.
1) If Alice and Bob observe the system in your first example, and Alice decides to keep precise track of X’s possible states while Bob just says “2-8”, the entropy of X+Y is 2 bits for Alice and 2.8 for Bob (see the sketch after these two points). Isn’t entropy a property of the system, not the observer? (This is the problem with “subjectivity”: of course knowledge is physical, it’s just that it depends on the observer and the observed system instead of just the system.)
2) If Alice knows all the molecules’ positions and velocities, a thermometer will still display the same number; if she calculates the average speed of the molecules, she will find this same number; if she sticks her finger in the water at a random moment, she should expect to feel the same thing Bob, who just knows the water’s temperature, does. How is the water colder? Admittedly, Alice could make it colder (and extract electricity), but she doesn’t have to.
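On point 1, a minimal sketch of where observer-relative numbers like these come from (my own illustration; I’m assuming the 2 and 2.8 bits are just the log2 of how many states each observer still considers possible):

```python
from math import log2

# Entropy of a uniform distribution over n still-possible states: log2(n) bits.
def entropy_bits(n_states: int) -> float:
    return log2(n_states)

# Alice tracks X precisely and has narrowed it to 4 possible states;
# Bob only knows "somewhere in 2-8", i.e. 7 possible states.
print(entropy_bits(4))  # 2.0 bits (Alice)
print(entropy_bits(7))  # ~2.807 bits (Bob) -- the "2.8" above
```

Same physical system, two different entropies, because entropy measures an observer’s remaining uncertainty.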
What’s the bad thing that happens if I do 35? It’s a mistake, but how will it prevent me from using words correctly? I’d still be able to imagine a triangular lightbulb.
“Sure, someone else knows the answer—but back in the hunter-gatherer days, someone else in an alternate Earth, or for that matter, someone else in the future, knew what the answer was.”
I think the difference is that someone else knows the answer and can tell you.
If people do have a religion-shaped hole (I can tell at least some do), what are they supposed to do about it? Ignoring it to focus on real things will not plug the hole. Modifying your brain or creating a real godlike thing is not possible yet. So what are we to do?
“My curiosity doesn’t suddenly go away just because there’s no reality, you know!” Eliezer, I want to high-five you.
Does this “Many worlds” thing imply that there exist (in some meaningful sense) other worlds alongside ours, in which the quantum events that didn’t happen here did happen? (If not, or if this is a wrong question, disregard the following.)
What are the moral implications? If some dictator says “If this photon passes through this filter (which it can do with probability 0.5), I will torture you all; if it is absorbed, I will do something vaguely nice.”, and the photon is absorbed, should we rejoice, or should we grieve for those people in another world who are tortured?
Should we try quantum suicide? I think I’m willing to die (at least once, but maybe not in a lot of worlds, my poor little brain can’t grasp the concept of multiple deaths) to let one world know whether the MWI is true.
What about other events? A coinflip isn’t really a quantum random event (and may not even be random at all if you know enough), but the coin is made out of amplitudes—are there worlds where the coin lands on the other side? We won WW2 by the skin of our teeth; are there any worlds where the Earth is ruled by Nazi Germany?
I don’t believe you.
I don’t believe most scientists would make such huge mistakes. I don’t believe you have shown all the evidence. This is the only explanation of QM I’ve been able to understand—I would have a hard time checking. Either you are lying for some higher purpose or you’re honestly mistaken, since you’re not a physicist.
Now, if you have really presented all the relevant evidence, and you have not explained QM in a way which makes some interpretation sound more reasonable than it is (what is an amplitude exactly?), then the idea of a single world is preposterous, and I really need to work out the implications.
I am not smarter than that. But you might (just might) be. “Eliezer says so” is strong evidence for anything. I’m too stupid to use the full power of Bayes, and I should defer to Science, but Eliezer is one of the few best Bayesian wannabes—he may be mistaken, but he isn’t crazily refusing to let go of his pet theory. Still not enough to make me accept MWI, but a major change in my estimate nonetheless.
As a side note, what actually happens in a true libertarian system is Europe during the Industrial Revolution.
Eliezer: “A little arrow”? Actual little arrows are pieces of wood shot with a bow. Ok, amplitudes are a property of a configuration you can map in a two-dimensional space (with no preferred basis), but what property? I’ll accept “Your poor little brain can’t grok it, you puny human.” and “Dunno—maybe I can tell you later, like we didn’t know what temperature was before Carnot.”, but a real answer would be better.
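For what it’s worth, here is a minimal way to make the “two-dimensional space with no preferred basis” concrete (my own illustration, not Eliezer’s): treat each configuration’s amplitude as a complex number. The squared magnitude gives the observed probability, and rotating every amplitude by the same phase (there is no preferred direction in the plane) changes nothing observable:

```python
import cmath

# Toy amplitudes for two configurations (invented values).
amplitudes = {"config_A": 0.6 + 0.0j, "config_B": 0.0 + 0.8j}

def probability(amp: complex) -> float:
    return abs(amp) ** 2  # Born rule: probability = |amplitude|^2

for config, amp in amplitudes.items():
    print(config, probability(amp))          # 0.36 and ~0.64

# Rotating every amplitude by the same phase leaves all probabilities
# unchanged -- the "no preferred direction" point.
phase = cmath.exp(1j * 0.7)
for config, amp in amplitudes.items():
    print(config, probability(amp * phase))  # still ~0.36 and ~0.64
```

That answers what kind of arrow it is (a complex number attached to a configuration) without answering the harder question of what the arrow is, physically.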
For some reason, this view of time fell nicely into place in my mind (not “Aha! So that’s how it is?” but “Yes, that’s how it is.”), so if it’s wrong, a lot of us are mistaken in the same way.
But that doesn’t dissolve the “What happened before the Big Bang?” question. I point at our world and ask “Where does this configuration come from?”, you point at the Big Bang, I ask the same question, and you say “Wrong question.”. Huh?
Why does the area under a curve equal the antiderivative? I’ve done enough calculus to suspect I somehow know the reason, but I just can’t quite pinpoint it.
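For reference, the standard way to pinpoint it is the fundamental theorem of calculus: define the area function and differentiate it.

```latex
F(x) = \int_a^x f(t)\,dt
\quad\Longrightarrow\quad
F'(x) = \lim_{h \to 0} \frac{F(x+h) - F(x)}{h}
      = \lim_{h \to 0} \frac{1}{h} \int_x^{x+h} f(t)\,dt
      = f(x)
```

For continuous f, the strip between x and x+h has area approximately f(x)·h, so the derivative of the area function is f itself; the area function is an antiderivative, and any two antiderivatives differ only by a constant.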
Your main argument is “Learning QM shouldn’t change your behavior”. This is false in general. If your parents own slaves and you’ve been taught that people in Africa live horrible lives and slavery saves them, and you later discover the truth, you will feel and act differently. Yet you shouldn’t expect your life far away from Africa to be affected: it still adds up to normality.
Some arguments are convincing (“you can’t do anything about it so just call it the past” and “probability”), but they may not be enough to support your conclusion on their own.
kevin: Eliezer has written about that already. The AI could convince any human to let it out. See the AI box experiment ( http://yudkowsky.net/essays/aibox.html ). If it was connected to the Internet, it could crack the protein folding problem, find out how to build protein nanobots (to, say, build other nanobots), order the raw materials (such as DNA strings) online and convince some guy to mix them ( http://www.singinst.org/AIRisk.pdf ). It could think of something we can’t even think of, like we could use fire if we were kept in a wooden prison (same paper).
I’m surprised no one seems to doubt HA’s basic premise. It sure seems to me that toddlers display enough intelligence (especially in choosing what they observe) to make one suspect self-awareness.
I’m really glad you will write about morality, because I was going to ask. Just a data dump from my brain, in case anyone finds this useful:
Obviously, by “We should do X” we mean “I/We will derive utility from doing X”, but we don’t mean only that. Mostly we apply it to things that have to do with altruism—the utility we derive from helping others.
There is no Book of Morality written somewhere in reality, like the color of the sky, about which you can do Bayesian magic as if it were a fact (though in extreme circumstances treating it as one can be a good idea: if almost everyone values human life as a terminal value and someone doesn’t, I’ll call them a psychopath and mistaken). Unlike facts, utility functions depend on agents. We will, if we are good Bayesian wannabes, agree on whether doing X will result in A, but I can’t see why the hell we’d agree on whether A is terminally desirable.
That’s a big problem. Our utility functions are what we care about, but they were built by a process we see as outright evil. The intuition that says “I shouldn’t torture random people on the street” and the one that says “I must save my life even if I need to kill a bunch of people to survive” come from the same source, and there is no global objective morality to call one good and the other bad, just another intuition that also comes from that source.
Also, our utility functions differ. The birth lottery made me a liberal ( http://faculty.virginia.edu/haidtlab/articles/haidt.graham.2007.when-morality-opposes-justice.pdf ). It doesn’t seem like I should let my values depend on such a random event, but I just can’t bring myself to think of ingroup/outgroup and authority as moral foundations.
The confusing part is this: we care about the things we care about for a reason we consider evil. There is no territory of Things worth caring about out there, but we have maps of it and we just can’t throw them away without becoming rocks.
I’ll bang my head on the problem some more.
I’m pretty sure you’re doing it wrong here.
“What if the structure of the universe says to do something horrible? What would you have wished for the external objective morality to be instead?” Horrible? Wish? That’s certainly not according to objective morality, since we’ve just read the tablet. It’s just according to our intuitions. I have an intuition that says “Pain is bad”. If the stone tablet says “Pain is good”, I’m not going to rebel against it, I’m going to call my intuition wrong, like “Killing is good”, “I’m always right and others are wrong” and “If I believe hard enough, it will change reality”. I’d try to follow that morality and ignore my intuition—because that’s what “morality” means.
I can’t just choose to write my own tablet according to my intuitions, because so could a psychopath.
Also, it doesn’t look like you understand what Nietzsche’s abyss is. No black makeup here.
Caledonian: 1) Why is it laughable? 2) If hemlines mattered to you as much as a moral dilemma does, would you still hold this view?
Folks, we covered that already! “You should open the door before you walk through it.” means “Your utility function ranks ‘Open the door then walk through it’ above ‘Walk through the door without opening it’”. YOUR utility function. “You should not murder.” is not just reminding you of your own preferences. It’s more like “(The ‘morality’ term of) my utility function ranks ‘you murder’ below ‘you don’t murder’.”, and most “sane” moralities tend to regard “this morality is universal” as a good thing.
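A toy sketch of that distinction (entirely my own illustration, with invented options and numbers): an instrumental “should” compares options inside your own utility function, while the moral “should” reports how the morality term of my utility function ranks your actions.

```python
# Toy illustration: "should" as a ranking inside a utility function.
# All option names and numbers are invented for the example.

my_utility = {
    "open door, then walk through": 1.0,
    "walk into closed door": -5.0,   # instrumental "should": my ranking of my options
    "you murder": -100.0,            # moral "should": my ranking of YOUR action
    "you don't murder": 0.0,
}

def should(option_a: str, option_b: str) -> bool:
    """'You should do A rather than B' == the utility function ranks A above B."""
    return my_utility[option_a] > my_utility[option_b]

print(should("open door, then walk through", "walk into closed door"))  # True
print(should("you don't murder", "you murder"))                         # True
```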
This pressure exists once religion is already in place, but doesn’t explain why it appears and spreads.
However, selecting for cheats doesn’t matter, since they must teach their religion to their children in order to properly simulate faith. Moreover, I suspect that most people who didn’t actively choose their religion but passively accepted it as children don’t fully believe it.