It’s not at all clear to me how, if the many worlds theory is correct, I will experience myself not dying. Assuming the many worlds theory is accurate, in some worlds the versions of “me” that are present there will cease to exist and not feel anything anymore, and in others the versions of “me” will survive. The many worlds interpretation doesn’t tell me anything about which set of universes I (singular) am in.
Once your calculator returns the result “even”, you assign 99% probability to the proposition “Q is even”. Changing that opinion would require strong Bayesian evidence. In this case, we’re considering hypothetical Bayesian evidence provided by Omega. Based on our prior probabilities, we would say that if Omega randomly chose an Everett branch (I’m going with the quantum calculator, just because it makes the vocabulary a bit easier), 99% of the time Omega would choose another Everett branch in which the calculator also read “even”. However, Omega seems to like messing with our heads, so we can conclude that this is probably not the algorithm Omega used to generate this problem. Instead, Omega purposely searched for an example of the 1% of all possible worlds in which the calculator read “odd”. If we assume this behavior on the part of Omega, the Bayesian weight of the evidence (the knowledge that there is at least one possible world in which the calculator reads “odd”) goes way down. It might be something, especially because we aren’t 100% certain of Omega’s motivations and algorithms, but it certainly wouldn’t be enough to adjust our prior probability all the way down below 50%.
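To make that concrete, here’s a minimal sketch of the update (the 99% prior comes from the calculator’s stated accuracy; the two selection algorithms are the hypothetical ones described above, and the exact likelihoods are just illustrative):

```python
# Toy Bayesian update for the calculator problem.
# Prior: P(Q is even) = 0.99, from the calculator's accuracy.
# Evidence E: Omega shows us a branch in which the calculator read "odd".

def posterior_even(p_even, p_e_given_even, p_e_given_odd):
    """Bayes' rule: P(even | E)."""
    num = p_e_given_even * p_even
    return num / (num + p_e_given_odd * (1 - p_even))

# Model 1: Omega picked an Everett branch uniformly at random.
# If Q is even, only 1% of branches read "odd"; if Q is odd, 99% do.
print(posterior_even(0.99, 0.01, 0.99))  # 0.5 -- a big update

# Model 2: Omega deliberately searched for an "odd" branch.
# One almost certainly exists either way, so the likelihoods cancel.
print(posterior_even(0.99, 1.0, 1.0))    # 0.99 -- almost no update
```

On the random-selection model the evidence would be strong; on the “Omega likes messing with our heads” model it washes out, which is the point.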
Looking at the numbers always helps. Statistically speaking, the majority of public school teachers are liberals. However, in your geographic area, that probability seems like it would need to be revised downwards. Still, even if you use probabilities to draw conclusions about the world, that doesn’t mean that you’ll never be surprised.
The idea of space elevators has been around for a long time. The technology isn’t completely available at present, but (as far as I know) it isn’t too far off from reality. If more effort and research were put into the program, space elevators could become a viable option fairly soon.
I think the primary reason why this Prometheus problem is flawed is that in Newcomb’s problem, the presence or absence of the million dollars is unknown, while in this Prometheus problem, you already know what Prometheus did as a result of his prediction. Think of a variation on Newcomb’s problem where Omega allows you to look inside box B before choosing, and you see that it is full. Only an idiot would take only one box in that scenario, and that’s why this analysis is flawed.
You are correct in saying that the technology isn’t here yet. I do think, though, that the Hero of Alexandria claim is a bit hyperbolic. I would be surprised if we had even inches of the necessary construction material, but I think part of the reason why it seems so far away is that there isn’t a major, concerted effort to do it yet. I’d say it sounds about as far off as the proposal to go to the moon did before the US had even achieved earth orbit. Or perhaps as far-fetched as the theoretical suggestion that the matter in atomic nuclei could be converted into energy, creating an incredibly powerful explosive, seemed before there was a major push to develop it. A space elevator is a theoretical idea at present, but when there is funding and effort behind a technological development, it can happen faster than we typically expect. I’m definitely not expecting a space elevator within the decade. But I’d be surprised if it wasn’t possible within my lifetime.
It’s difficult to craft a utilitarian argument for stealing his wallet. The only easy way to do so would be if the money went to charity.
That being said, I would probably still do so. As a rationalist, I know it’s not a positive action, but it would still give me (irrational) emotional enjoyment. Plus, you get a great story out of it. Imagine being able to tell your friends that you ruined Hitler’s evening.
I think in the classic Newcomb’s problem, because Omega is a superintelligence and an astonishingly accurate predictor of human behavior, you have to assume that Omega predicted every thought you have, including that one. For that reason, we’re assuming that it’s just about impossible for you to “trick” Omega. However, if you know, for a fact, that both boxes are filled, then you know exactly what Omega modeled you doing. That doesn’t mean that you have to do it. At this point, it is possible to trick Omega. Taking both boxes just means that Omega made a mistake about what you’d do.
I’ve heard people argue, as you are, that rational agents should one box on transparent Newcomb’s, but I’ve never heard a good explanation for why they think that. Care to help me out?
The main issue is how intelligent the prisoner is. As it stands, the prisoner used some clever logic to prove that he will not be executed that week, failing to consider the possibility that the judge would predict that reasoning. If he thought about it a bit more, he might realize that the judge might well be anticipating it, and therefore expect the hangman to come on any given day.
Then, if he kept thinking, it might occur to him that the judge may have predicted that too, and so might not send the hangman. However, the judge is capable of making mistakes. He is human. So the prisoner can conclude either that he may well be hanged this week, even though it won’t be a surprise, or that the hangman will not come at all, because the judge predicted his expectation perfectly.
This paradox is only confusing (from the prisoner’s standpoint) if you consider the judge to be infallible. He’s not. If the judge were Omega, on the other hand, we might run into some problems.
Well, I just had an interesting opportunity to try out some of these techniques, because I was supposed to be working on a project and decided to “take a break” by reading Less Wrong. These techniques do seem to be helping.
I am a little bit leery of the first section, about trying to increase your own optimism. In general, I’m a little suspicious of trying to get myself to feel something that may not be justified. Fortunately, in my own case, I do know that I am perfectly capable of completing my current goal. I’ve done harder things.
Thank you, that is helpful. I still have a slight problem with it, though. In the classic Newcomb’s problem, I’m in a state of uncertainty about Omega’s prediction. Only when I actually pick up either one box or two can I say with confidence what Omega did. At the moment that I pick up Box B, I do know that I am leaving behind $1000 in Box A. At this point, I might be tempted to think that I should grab that box as well, since I already “know” what’s inside of it. The problem is that Omega probably predicted that temptation. Because I don’t know Omega’s decision while I’m considering the problem, I can’t hope to outsmart it.
I would argue, though, that getting $1,001,000 out of Newcomb’s problem is better than getting $1,000,000. If there’s a way to make that happen, a rational agent should pursue it. This is only really possible if you can outsmart Omega, which does seem like a very difficult challenge. It’s really only possible if you can think one level further than Omega. In classic Newcomb’s, you have to presume that Omega is predicting every thought you have and thinking ahead of you, so you can’t ever assume that you know what Omega will do, because Omega knows that you will assume that and do differently. In transparent Newcomb’s, however, we can know what Omega has done, and so we have a chance to outsmart it.
Obviously, if we are anticipating being faced with this problem, we can decide to agree to only take one box, so that Omega fills it up with $1,000,000, but that’s not what transparent Newcomb’s is asking. In transparent Newcomb’s, an alien flies up to you and drops off two transparent boxes that contain between them $1,001,000. It doesn’t matter to me what algorithm Omega used to decide to do this. Rationalists should win. If I can outsmart Omega, and I have an opportunity to on transparent Newcomb’s, I should do it.
Firstly, you didn’t state whether you were considering public or private school. Most of my experience and knowledge is with public school, so some of my own points may not be applicable.
Finances: Public school would be cheaper than homeschooling for you, but I don’t know about a cost comparison to private schools. In terms of social impact, I have no real knowledge either way.
Behavioral: I knew a lot of kids who were homeschooled growing up. Most of their parents tried to get together with other homeschooling families as often as possible, so that their kids had social interactions. Those individuals were pretty well adjusted. I knew of a few kids whose parents didn’t do that, and who weren’t nearly as social. Availability bias may be affecting my judgement on this. If you do decide to homeschool your kids, and you do decide to get together with other homeschooled kids, you should consider the people who are likely to be in that social group. You are not religious, but the vast majority of them will be. Your kids’ friends will probably not be learning about evolution. You may have to stave off conversion attempts. That’s a possible downside. With regards to extracurriculars, I knew a lot of homeschooled kids who participated in (mostly scholastic) competitions between schools. Again, availability bias is skewing my own perception of how common this is, because that’s how I met most of them.
Child raising: Most homeschooled teenagers I knew weren’t big rebels. Part of that may be religious; most of them frequently went to church youth groups and Bible studies and had parents who were fairly strict. I don’t know how much of that was due to the parents and how much to the religion, but those will probably be the children your kids are friends with.
Your wife: I tend to agree with your own take on this, assuming you can take care of teaching your kids all on your own.
Another point you didn’t mention: Education. Are your kids going to get a better education at home or at a conventional school? You are reading and commenting on Less Wrong, so I assume that rationalist methods of thinking are fairly important to you, and that you probably want your children educated in them. You can teach them that as they grow older, but a public school probably won’t. You could still teach them those skills on the side, though, if you wanted to. Considering it the other way, I don’t know the quality of the schools in your area, but how educated are you in each subject compared to the average teacher? By the time your kid gets to high school, most of their teachers will have master’s degrees in their respective fields. You may be good at the field you currently work in, but are you going to be as good at teaching the American Revolution as a professional in the field of history? And as good at teaching chemistry as someone with an advanced degree in chemistry? A major advantage of having multiple teachers is the capability for specialization. Your kids are likely to get more knowledge in a public school, but might get better thinking skills with you.
Personally, I went to public school and did quite well. I was in all honors classes, which helped give me good thinking skills and knowledge. I definitely had complaints about the way the school system was run, but I did (eventually) develop good social skills because of the constant daily interaction with other people. However, my parents did teach me a lot outside of school. They bought me books, encouraged my own curiosity and originality, and did a pretty good job at finding a balance between discipline and liberalism. No matter what you choose to do, don’t forget that you will still be immensely important to your child’s education.
Happy to help.
With regards to the side note on religion, that sounds fairly similar to my own upbringing. My dad was fairly nonreligious; maybe deism is the right word. I haven’t talked about it with him all that much, but he’s definitely not Christian. My mom, on the other hand, is quite religious. She’s not a fundamentalist; she’s a biologist and believes in evolution, etc., but she still definitely gave me and my brother religion. I can’t say that that was fantastic, but I started becoming a rationalist as I transitioned from a Christian to an atheist. If your wife is raising your kids religious, they might yet get some benefits from it, even if it’s not what your wife intended. Emphasis on might, though.
I would definitely recommend teaching holding off on proposing solutions as soon as they have the basic background knowledge to understand it. Maps and territory, as you mentioned above, is also a good, foundational topic.
I use holding off on solutions many times each day, when I’m thinking about any of life’s little puzzles. That is one of the most useful lessons I’ve ever learned.
Making beliefs pay rent is something I would teach very early on.
The general idea of reductionism: the world, and most problems, can be broken down into smaller and smaller parts, which is often a useful problem-solving tool.
Another very useful concept is positive bias. Teaching your brain to look for counterexamples as well as examples is an extremely important tool for determining the truth.
I think these, in general, are some of the most important topics to teach if you want people to start becoming rationalists. In terms of how to teach them, I would say that encouraging curiosity and supporting a questioning mindset is fundamental. I also think that I learned most of the techniques of rationality in terms of the problem I was working on at the time. I’d read something on Less Wrong or in a book and see an immediate and specific application for the general technique that I’d just learned. If you’re teaching people in a robotics club, I’d say that you shouldn’t necessarily make a syllabus or anything like that; just wait until you see them working on something where a certain lesson in rationality might be applicable.
On a complete side note, in your introduction you mentioned that you’d used rationality to get a girlfriend. I’m actually planning to ask out a girl I know in the next day or two, and that caught my attention. I’m curious what you did, or how you went about doing that.
I’ve typically found that straight up “working out” is not very enjoyable. I still do it; when I wake up I typically do some basic calisthenics. That has done quite a bit for my fitness, but it’s not very fun. Exercise, for me, is more fun in some form of a game. I generally enjoy soccer, but don’t play it very often. The exercise I enjoy the most, however, is martial arts. Fencing is my personal favorite, mostly because of the intellectual component involved (my coach refers to it as physical chess). If you really want to enjoy exercising more, find a sport you enjoy.
There’s something that’s always bothered me about these kinds of utilitarian thought experiments.
First of all, I think it’s probably better to speak in terms of pain rather than torture. We can intelligently discuss trade-offs like this in terms of “I’m going to punch someone” or “I’m going to break someone’s leg; how much fun would it take to compensate for that?” Torture is another thing entirely.
If you have a fun weekend, then you had an enjoyable couple of days. Maybe you gained some stories that you can tell for a month or two to your friends who weren’t with you. If it was a very fun weekend, you might have learned something new, like how to water ski, something that you’ll use in the future. Overall, this is a substantial positive benefit.
If you torture someone for half an hour, not even an entire weekend, it’s going to have a much larger effect on someone’s life. A person who is being tortured is screaming in agony, flailing around, begging for the pain to stop. And it doesn’t. Victims of torture experience massive psychological damage that continues for long after the actual time of the act. Someone who’s tortured for half an hour is going to remember that for the rest of their lives. They may have nightmares about it. Almost certainly, their relationships with other people are going to be badly damaged or strained.
I’ve never been tortured. I’ve never been a prisoner of war, or someone who was trying to withhold information from a government, military, or criminal organization that wanted it. I have lived a pretty adventurous life, with sports, backpacking, rock climbing, etc. I’ve had some fairly traumatic injuries. I’ve been injured when I was alone, and there was nobody within earshot to help me. At those times, I’ve just lain there on the ground, crying out of pain, and trying to focus enough to get myself back to medical care. Those experiences are some of the worst of my life. I have a hard time trying to access those memories; I can feel my own mind flinching away from them, and despite all of my rationality, I still can’t fight some of those flinches. What I experienced wasn’t even all that terrible. They were moderate injuries. Someone who was tortured is going to have negative effects that are ridiculously worse than what I experienced.
I’ve spent some time trying to figure out exactly what it is about torture that bothers me so much as a utilitarian, and I think I’ve figured it out, in a mathematical sense. Most utilitarian calculations don’t factor in time. It’s not something that I’ve seen people on Less Wrong tend to do, even though it’s pretty obvious. Giving someone Y amount of pain for 5 minutes is better than giving them Y amount of pain for 10 minutes. We should consider not just how much pain or fun someone is experiencing now, but how much they will experience as time stretches on.
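As a minimal sketch of what factoring in time looks like (every number here is invented purely to show the shape of the calculation, not a claim about actual magnitudes of pain or fun):

```python
# Toy time-weighted utility: total (dis)utility is intensity integrated
# over duration, not a snapshot of how good or bad things are right now.

def total_utility(intensity_per_day, days):
    """Crude model: constant intensity multiplied by duration."""
    return intensity_per_day * days

fun_weekend = total_utility(intensity_per_day=10, days=2)            # +20
punch = total_utility(intensity_per_day=-30, days=1)                 # -30, sore for a day

# Half an hour of torture, followed by decades of psychological aftermath:
torture_itself = total_utility(intensity_per_day=-10000, days=0.02)  # -200
aftermath = total_utility(intensity_per_day=-5, days=365 * 40)       # -73000

print(fun_weekend + punch)         # a punch roughly trades against a weekend or two of fun
print(torture_itself + aftermath)  # the lifetime aftermath term dominates everything
```

The aftermath term, stretched over a lifetime, is what blows up the calculation.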
Getting back to the original question, if I could give three or four people a very fun weekend, I’d punch someone. If I could give one person an extremely fun weekend, I’d punch someone. I’d punch them pretty hard; I’d leave a bruise and make them sore the next day. But if I’m torturing someone for a month, I am causing them almost unimaginable pain for the rest of their life. X and N are going to have to be massive before I even start considering this trade, even from a utilitarian standpoint. I can measure pain and fun on the same scale, but a torture-to-fun conversion is vaguely analogous to comparing light years to inches.
If the main reason a small amount of torture is much worse than we might naively expect is that even small amounts of torture leave lasting, severe psychological damage, should we expect the disutility of torture to level off after a few (days/months/years)?
In other words, is there much difference between torturing one person for half an hour followed by weeks of moderate pain for that person, and torturing that person for the same number of weeks? The kind of difference that would justify denying, say, hundreds of people a fun weekend where they all learn to waterski?
I’m not sure what exactly you’re getting at with that specific example. I think that yes, torturing someone for weeks, followed by years of psychological pain is significantly worse than torturing someone for half an hour followed by weeks of (probably a bit less severe) psychological pain.
Your general point, however, I think definitely has some merit. Personally, I wouldn’t expect to see much psychological difference between an individual who was tortured for five years versus a person who was tortured for ten. I would definitely expect to see a larger difference between someone tortured for six years versus someone tortured for one. Certainly there’s a massive difference between 5 years and 0. There probably is some sort of leveling off factor. I don’t know exactly where it is, or what that graph would look like, but it probably exists, and that factor definitely could influence a utilitarian calculation.
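If I had to guess at what that graph looks like, it might be something like a saturating curve. Here’s a toy version (the ceiling and the rate constant are pure invention, just to show the leveling-off behavior):

```python
import math

# Toy "leveling off" model: lasting damage rises steeply at first,
# then flattens as the duration grows. D_MAX and K are invented constants.
D_MAX = 1000.0  # asymptotic ceiling on lasting damage
K = 0.5         # approach rate toward the ceiling, per year of torture

def lasting_damage(years):
    return D_MAX * (1 - math.exp(-K * years))

for years in (0, 1, 5, 6, 10):
    print(years, round(lasting_damage(years), 1))
# 0 -> 0.0, 1 -> 393.5, 5 -> 917.9, 6 -> 950.2, 10 -> 993.3
# 5 vs 10 years differ by ~75; 1 vs 6 by ~557; 0 vs 5 by ~918.
```

A curve like that reproduces the ordering above: a small difference between five and ten years, a larger one between one and six, and a massive one between zero and five.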
If we’re talking about torture vs death, if we’re using preference utilitarianism, we can say that the point where the torture victim starts begging for death is where that dividing line can be drawn. I don’t know where that line is, and it’s not an experiment I’m inclined to try anytime soon.
That is a good point. If you could wipe their memories in such a way that they didn’t have any lasting psychological damage, that would make it significantly better. It’s still pretty extreme; a month is a long time, and if we’re talking about a serious attempt to maximize their pain during that time, there’s a lot of pain that we’d have to cancel out. X and N will still need to be very large, but not as large as without the drugs.
Lol. I’m inclined to agree with you there. However, considering that I’m writing this while I lie in bed with my foot propped up, having shattered a few bones during my last “weekend of extreme fun”, I’m beginning to reevaluate my priorities. ;)
If we’re looking at meta angles, reductionism itself is an obvious one, and is certainly necessary in the field of intelligence. Understanding an entire brain is a near impossible task if not broken up. Understanding a particular part of a brain that performs a particular function, while still very difficult, is more within reach. Understanding a sub component of that particular part is easier still. And so on and so forth.