Thanks! I worried for a while about changing my mind too much on the basis of one blog, and I still don’t agree with the Less Wrong consensus on everything, but overall I’ve found them very helpful. Anything specifically you would view with a skeptical eye?
Yes, many people value biodiversity (and it’s perfectly rational to do this). But I think the problem here is the same as the problem with worrying about global warming—yes, it’s likely a problem, but there are a lot of environmentalists, so the marginal utility of additional worrying is probably pretty much zero, unless you think there is something you can uniquely bring to the movement. There are a lot of things we could worry about, and only so much energy we can exert to change them...
Utilitarianism as most people practice it is not strictly about how many people live or die. Killing someone “fairly” and according to a predetermined set of rules presumably seemed better to the survivors than killing, say, whoever was least able to defend themselves, and worse to the survivors than eating someone who had died of natural causes, even though in all three cases the total number of survivors was the same.
I would certainly prefer outcome A (1 person dies, everyone else feels really bad about it) to outcome B (2 people die, no one feels bad about it). I think most people would in this experiment, with a small group of survivors. But there exists a number of people involved (I’ll say it’s smaller than 3^^^3) at which I would prefer outcome B to outcome A.
Therefore, considerations beside death could theoretically be significant, and utilitarianism can’t be simplified to body count.
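For concreteness, 3^^^3 is Knuth’s up-arrow notation, standard around here since the torture-vs-dust-specks post: one arrow is exponentiation, and each extra arrow iterates the previous operation. A minimal sketch of the recursion:

```python
def up(a, n, b):
    """Knuth up-arrow: a followed by n arrows, then b.
    With n == 1 this is plain exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    # Peel one arrow off by iterating the (n-1)-arrow operation.
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# up(3, 3, 3) -- i.e. 3^^^3 -- is a tower of 7625597484987 threes.
# Don't actually run it; its unimaginable size is the point.
```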
Can’t speak as a parent, but speaking as a child:
Read to them. Lots, about everything. Confident readers will find school easier, enjoy learning outside of school, and won’t come to associate reading with school and work. Get a set of children’s encyclopedias, if you can find good ones.
Do science with them. If they ask you how something works, ask them how they could figure it out. My little brother thought heavier things would fall faster: we went outside and tried it. Then we watched the YouTube video of the feather-hammer experiment on the moon.
Answer their questions, even when they have a million of them. Better yet, get them to figure out the answers to their own questions.
Watch commercials with them, once they’re old enough to watch TV. Explain how the commercials try to trick people (this is the easiest way to introduce biases, but don’t call them that).
If they’ve won an argument, tell them so, and tell them why. If you win an argument, tell them why. “Go to bed because I said so” is unhelpful. “Go to bed because when you don’t, you’re tired the next day and won’t have any fun” is helpful. You can (and should) still make them go to bed.
Don’t tell them to “be rational”. Show them what it actually looks like. When you make decisions, explain your thought process to them, even if you have to oversimplify. When they make decisions, ask them about theirs.
Remember, if you’re even thinking about this, you’re ahead of 99% of the planet. Kids usually manage to turn out okay.
What sort of substantiation were you looking for? The article pretty clearly states that the claims about the effects of the camp were based on exit surveys, and that the impact of the camp is demonstrated by the projects the camp grads are now working on. You could debate whether those are good measures, but we don’t exactly have better ones.
I agree it would be nice if we could come up with standardized tests for rationality and then test whether camp attendance improves scores, but even if this were possible it hardly seems the best conceivable use of SIAI’s resources.
Upvoted, for providing specific constructive suggestions for SIAI. I hope no one will meet requests such as these with derision, and I’m not entirely sure why you’d expect that.
I am not signed up for cryonics.
1) I think I can save more lives by being an organ donor. 2) I can’t afford it, even with life insurance. 3) If there is a Singularity, I expect it will happen before I die anyway.
I can’t actually sign up until I’m 18 even if all these are refuted, but I will precommit to signing up when I’m old enough.
1) Statistics on this are almost impossible to find, with lots of websites declaring that you can save 100 lives without any substantiation. If there are any studies of average lives saved per donor, I haven’t been able to find them. Saving 100 people another way would be prohibitively expensive, but I’m not convinced those numbers are right. 2) This is my biggest hang-up. It’s hard to get a loan without a steady job, and most people I know won’t loan me money for something they think is crazy. At least for the next 30 years, my chances of dying where cryonics would be an option are pretty small. When does it stop being OK to wait? 3) Now that you point it out, this is more an excuse-to-stop-thinking than an answer. It’s easier not to worry about whether I could sign up for cryonics when I can downsize the expected impact by a factor of 100, but you’re right—even a 1% chance would still be worth it.
1) I find GiveWell’s analysis very convincing on the question of which charity to donate to; they estimate it costs between $500 and $1000 to save a life with Village Reach. What I can’t seem to find is how many lives I would save by becoming an organ donor—if GiveWell has reported on this, I can’t find it (and it seems outside their scope).
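As a back-of-the-envelope check on why matching the (dubious) “100 lives” organ-donor claim through donations would be prohibitively expensive, here is the arithmetic at GiveWell’s quoted range:

```python
cost_per_life_low, cost_per_life_high = 500, 1000  # GiveWell's VillageReach estimate (USD)
lives = 100  # the unsubstantiated organ-donor claim

# Cost to save that many lives via direct donation:
print(lives * cost_per_life_low, lives * cost_per_life_high)  # 50000 100000
```

Fifty to a hundred thousand dollars: well beyond what a teenager can donate, which is the sense in which “prohibitively expensive” holds even at GiveWell’s optimistic numbers.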
2) I’m taking a look at this. It appears to be nearly impossible to buy life insurance when under 18, but I’ll keep looking.
1) I value my life more than the lives of 4-8 strangers, as demonstrated by the fact I haven’t committed suicide to donate my organs. Based on the reading I have done so far, I can’t realistically assign cryonics a greater than 10% chance of actually working, so the question is whether I value my life (discounted by a factor of 10) more than the lives of 4-8 strangers, which I don’t. If Omega told me cryonics was guaranteed to work, I would sign up.
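The comparison in (1) can be sketched as a toy expected-value calculation. Every number here is the comment’s own estimate (the 10% success chance, the 4–8 lives), not an established fact, and it assumes strangers’ lives are valued equally to one’s own:

```python
p_cryonics_works = 0.10        # commenter's upper-bound estimate
value_own_life = 1.0           # normalize: one life = 1 unit
lives_saved_as_donor = (4, 8)  # commenter's assumed range

# Expected value of signing up, in life-units:
ev_cryonics = p_cryonics_works * value_own_life  # 0.1

# Under equal valuation, donation dominates even at the low end:
print(ev_cryonics < lives_saved_as_donor[0])  # True
```

If Omega raised p_cryonics_works to 1.0, the inequality flips, which matches the comment’s conclusion.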
2) Making money without a high school degree, special skills, or Eliezer-level intelligence is more difficult than I think most highly-trained people realize. I’ll PM you, though.
3) I would assign a very high probability to a Singularity within my lifetime; I would also say I am 85% confident that if the Singularity does not happen in my lifetime, it will not happen. If the 21st century closes without any of the advances we anticipate, that would dramatically increase my estimate that they are impossible. But I’ve conceded to Alexai that even discounting for all this, it is probably still worth it; if I can resolve the other issues I will sign up.
Doing this exercise has really forced me to clarify my thinking on this—you should try it. I did a little research and it looks like it would be less than $15 a month for me, which removes that objection—except for the fact I can’t buy life insurance until I’m 18.
I think you’re probably saving less than 1 life on average by being a donor.
U.S. websites tend toward overblown claims (100 lives saved per donor...) that have made it nearly impossible for me to figure this out. It appears there are 15,000 donors per year here, and around 28,000 lives saved, implying it’s more than 1 life per donor (but not 5 or 10, as I had assumed).
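The implied rate from those figures, as a quick sanity check (the 15,000 and 28,000 are the comment’s rough numbers, not verified statistics):

```python
donors_per_year = 15_000       # rough U.S. figure from the comment
lives_saved_per_year = 28_000  # rough U.S. figure from the comment

rate = lives_saved_per_year / donors_per_year
print(round(rate, 2))  # 1.87 -- more than one life per donor, nowhere near 100
```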
I am currently signed up to be a donor, and I’m not really trying to wiggle out so much as figure out which option is better.
Supporting cryonics as a younger person (whether by signing up or by supporting it from the sidelines) could result in earlier development of induced hypothermia and other related biochemical alternatives, eliminating the need for cryonics as we know it.
So you would recommend signing up at 16, even if my personal odds of dying now are pretty small?
Cryonics also doesn’t depend directly on a singularity, just either very good (compared to today) scan/emulate tech or very good cell repair tech.
What would you estimate as the probability of developing technology that will make cryonics work without a singularity?
The new front page is great, good work.
One nitpick: It seems to me that the Back to Less Wrong button on the wiki should take you to the main page, not the welcome page again.
Welcome! Encountering Less Wrong as a teenager is one of the best things that ever happened to me. One of the most difficult techniques this site can teach you, changing your mind, seems to be easier for younger people.
Not understanding half the comments on this blog is about standard for a first visit to the site, but you aren’t stupid; if you stick with it, you’ll be fluent before you know it. How much of the site have you read so far?
Yes, this is right. A better way of saying it might be: “phlogiston”, as the early chemists understood it, meant “that which makes stuff burn”. So saying “phlogiston causes fire” is like saying “the stuff that makes things burn causes stuff to burn.” Put that way, it’s obvious that “phlogiston” doesn’t explain anything.
If you wanted to test the hypothesis “phlogiston causes stuff to burn”, you really couldn’t, because phlogiston isn’t a proper explanation—there aren’t any conditions that would disprove it. For the hypothesis to even be worth considering in the first place, it has to make better predictions than the alternatives.
I thought that just made theorists respond “So phlogiston must be lighter than air”. But you’re right, the article exaggerates the unfalsifiable, fails-to-constrain-expectations, fake-causality aspects of the theory and oversimplifies it a bit.
Good rationalists shouldn’t read Good and Real? Why not? Where is this argued?
Hello Less Wrong!
I’m 16, female, and a senior in high school. Before I started reading here, I was not particularly interested in math, science, or rationality (which I had never really heard of). I stumbled on Harry Potter and the Methods of Rationality in October, and fell in love immediately. I read through the whole story in one night, and finally made the leap to Less Wrong during Eliezer’s hiatus.
I started on Less Wrong by reading Mysterious Answers to Mysterious Questions and within three posts I realized that, for the first time in my life, I was surrounded by people significantly smarter than me. Some people would probably have been excited about that; I was terrified. I promised myself that I wouldn’t post—wouldn’t even create an account, to avoid the temptation of posting—until I had read all the sequences and understood everything everyone said.
In retrospect, that may have been setting the bar a little too high for myself, especially since seven more sequences were added while I was reading. I eventually revised my standard to “I will not comment until I’m sure I actually have something to add to a discussion, and until I understand the things I have read well enough to explain them convincingly to 4 of my friends.”
The fact that I had to set all of those hurdles for myself just to have the self-confidence to create an account probably tells you a little about myself—I’m not ordinarily insecure, but I was so excited to find something like this I was very worried about “messing it up”. I’ve now read about 90% of the sequences and 98% of everything posted on Less Wrong in the last few months, and understood almost all of it (the quantum physics and decision theory sequences still confuse me). I’m not sure “read everything before you start to contribute” is generally a good guideline for new visitors, but for me it was perfect. I changed my mind about a lot of important things along the way—if there’s enough interest, I may discuss this in a post about exposing more teenagers to rationality.
So, thank you all for this great site! I hope I’ll be able to contribute.