Completed the survey. Thanks for doing this; the results are always interesting.
BlueSun
Something a Chess Master told me as a child has stuck with me:
How did you get so good?
I’ve lost more games than you’ve ever played.
-- Robert Tanner
I don’t mean to be rude but as an FYI:
At times, this evidence can be of critical importance. I can attest that I have personally saved the lives of friends on two occasions thanks to good situational awareness, and have saved myself from serious injury or death many times more.
This lowers my confidence in the post. Almost everyone I know has a story about how they almost died but for a moment of abnormal cunning or pure luck; yet I know few people who have died for reasons that would have been avoidable had they or someone around them been more observant. This suggests to me (since not everyone can be above-average observant or lucky) that in most of those stories, the chance of death wasn’t as high as they thought it was. It’s certainly possible that your case is different, but I’d prefer to either see the specific stories or see a less extreme example used in the post. Or maybe it’s just me and no one else is bothered by it.
The “known knowns” quote got made fun of a lot, but I think it’s really good out of context:
“There are known knowns; there are things we know that we know. There are known unknowns; that is to say, there are things that we now know we don’t know. But there are also unknown unknowns – there are things we do not know we don’t know.”
Also, every time I think of that I try to picture the elusive category of “unknown knowns” but I can’t ever think of an example.
would you seriously, given the choice by Alpha, the Alien superintelligence that always carries out its threats, give up all your work and horribly torture some innocent person, all day for fifty years, in the face of the threat of 3^^^3 insignificant dust specks barely inconveniencing sentient beings? Or be tortured for fifty years to avoid the dust specks?
Likewise, if you were faced with your Option 1: Save 400 Lives or Option 2: Save 500 Lives with 90% probability, would you seriously take option 2 if your loved ones were included in the 400? I wouldn’t. Faced with statistical people I’d take option 2 every time. But make Option 1: Save 3 lives and those three lives are your kids or option 2: Save 500 statistical lives with 90% probability I don’t think I’d hesitate to pick my kids.
In some sense, I’m already doing that. For the cost of raising three kids, I could have saved something like 250 statistical lives. So I don’t know that our unwillingness to torture a loved one is a good argument against the math of the dust specks.
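For what it’s worth, the arithmetic behind both comparisons above is easy to check. This is a rough sketch; the per-child and per-life cost figures are my own illustrative assumptions, not established numbers:

```python
# Expected lives saved under each option from the thought experiment:
option_1 = 400                  # save 400 lives for certain
option_2 = 500 * 0.90           # save 500 lives with 90% probability
print(option_2 > option_1)      # expected value favors option 2 (450 > 400)

# The "250 statistical lives" figure, using assumed costs:
# ~$250,000 to raise one child, ~$3,000 per statistical life saved
# via an effective charity (both numbers are illustrative assumptions).
cost_per_child = 250_000
cost_per_life = 3_000
lives_forgone = 3 * cost_per_child / cost_per_life
print(lives_forgone)            # -> 250.0
```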
I deconverted in large part because of Less Wrong. Looking back at it now, I hadn’t had a strong belief since I was 18 (by which I mean, if you asked most believers what the p(god) is they’d say 100% whereas I might have said 90%) but that might just be my mind going back and fixing memories so present me thinks better of past me.
I’d be happy to do an AMA (I went from Mormon to Atheist) but a couple of the main things that convinced me were:
Seeing that other apologists could make up similar arguments to make just about anything look true (for example, apologists for other religions, homeopathy, anti-vaccine claims, etc.)
Seeing the evidence for evolution and specifically, how new information supports true things. That showed me that for true things, new information doesn’t need to be explained away, but actually supports the hypothesis. For example, with evolution discoveries such as carbon dating, the fossil record, and DNA all support it. Those same discoveries have to be explained away via apologetics for religions.
Bayesian thinking. I have an econ background so I kind of did this informally, but the emphasis from Less Wrong that once you see evidence against, you need to actively lower your probability a bit really helped me. Before, I’d done what EY pointed out: take all of your evidence for and stack it against this one piece of evidence against, and then when the next piece of evidence against comes along, take all your evidence for and stack it against that one, etc.
The value that I want to believe what is true. I had this before but wasn’t as proactive about it.
Before, I felt like my belief system was logical and fit the evidence, and that if someone didn’t believe, it was because they hadn’t looked at the evidence and fairly considered it. Seeing people look at the evidence and then cogently explain why they still didn’t believe gave me an “I notice I’m confused” moment.
etc.
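The updating habit described above can be sketched as sequential Bayesian updates, where each piece of evidence multiplies the current odds by a likelihood ratio. The prior and the ratios below are made-up numbers for illustration:

```python
def update_odds(odds, likelihood_ratio):
    """One Bayesian update: posterior odds = prior odds * likelihood ratio."""
    return odds * likelihood_ratio

# Made-up illustration: start at 9:1 odds in favor of a hypothesis
# (p = 0.9), then see three pieces of evidence against it. Each one
# must lower the odds at the time it is seen -- it can't be deferred
# and re-weighed against the full stack of favorable evidence later.
odds = 9.0
for ratio in [0.5, 0.5, 0.5]:   # three pieces of evidence against
    odds = update_odds(odds, ratio)

probability = odds / (1 + odds)
print(probability)  # drops from 0.9 to about 0.53
```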
The real-life example here is electric utilities. The way they’re regulated, they charge a kWh price roughly equal to the average total cost (let’s say about 12 cents). The proper way to price would be at the marginal cost (around 4 cents). The fact that marginal costs are below average total costs is what makes them a natural monopoly.
The somewhat obvious better solution would be to charge marginal cost for each kWh and then collect the massive fixed costs through some other method. But for whatever historical reasons, we don’t do that, and most (all?) utilities price each kWh at about the average total cost. This means that, as a society, the quantity of kWh we demand is well below where economic theory says it should be.
However, there is probably a fairly substantial pollution/CO2 externality to producing electricity. Without some analysis it isn’t obvious whether we’re producing too much electricity or too little.
I did try once to look at estimates of the size of the externality to see if it made up for the pricing way above marginal cost issue and the preliminary results were that the externality was smaller (meaning, global warming considered, we’re still not using enough electricity). However, there were a couple of points I’d need to get into deeper.
1) The pricing above marginal cost issue is greatest for residential rates and smallest for industrial rates. I was looking at residential rates. Using the same cursory analysis on industrial rates would mean that we’re overusing electricity in industrial sectors.
2) The carbon externality number I used from the EPA seemed to be derived by figuring out how high the price of electricity would need to be to get usage down to the level they wanted. Under correctly priced utility rates (i.e., priced at marginal cost), their analysis may have produced a much higher $/kWh externality number. At the same time, I’m a little suspicious of that method of calculating the externality, as it would imply that if the cost of production halved, it wouldn’t be optimal for society to produce more. So I’d need to do some more research to make sure I’m using good pollution/CO2 numbers.
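As a rough illustration of the comparison I attempted: all figures below are placeholder assumptions for this sketch, not the actual EPA numbers I used.

```python
# Back-of-the-envelope comparison of the two distortions, per kWh.
# All numbers are placeholder assumptions for illustration only.
retail_price = 0.12        # ~average total cost, $/kWh (residential)
marginal_cost = 0.04       # ~marginal cost, $/kWh

# Assumed carbon externality: ~1 lb CO2 per kWh at a $50/ton
# social cost of carbon (both figures are assumptions).
lbs_co2_per_kwh = 1.0
social_cost_per_ton = 50.0
externality = lbs_co2_per_kwh / 2000 * social_cost_per_ton  # $/kWh

overpricing = retail_price - marginal_cost          # ~0.08 $/kWh
socially_optimal_price = marginal_cost + externality

# If the externality is smaller than the overpricing, the retail
# price exceeds the socially optimal price -> we underuse electricity.
print(externality)                             # ~0.025 $/kWh
print(retail_price > socially_optimal_price)   # True under these numbers
```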
I haven’t seen this issue discussed by people like Mankiw when they talk about the Pigou Club and I think it probably should be. If there’s interest I could probably write this up a bit more formally and make it a post.
Is there a thread somewhere about effective ways to plant the ‘rationalist seed’ in your children? I’d like to see something other than anecdotes ideally. But just ideas about books to read, shows to watch, or places to visit for different ages of children would be useful to me. For example,
My 2 and 4 year old both love Introductory Calculus For Infants
And a couple of years ago I got the Star Wars ABC, which led to a HUGE love of Star Wars. I’m hoping that turns into a love of Science Fiction...
Great article. I have a particular fondness for this line of reasoning as it helped me leave my religious roots behind. I ended up reasoning that despite assurances that revelation was 100% accurate and should be relied on over any and all scientific evidence (because it’s just “theories”), there was an x% chance that the revelation model was wrong. And for any x% larger than something like 0.001%, the multiple independent pieces of scientific, historical, and archaeological evidence would crush it. I then found examples of where revelation was wrong, and it became clear that x% was close to what you’d expect from an “educated guess.” And yes, I did actually work out all the probabilities with Bayes’ theorem.
Hmm, the microeconomics of outsourcing child production to countries with cheaper human-manufacturing costs… then we import them once they’re university-aged? You know you’ve got a good econ paper going when it could also be part of a dystopian novel plot.
I’m thinking of it more like Minecraft in real life. I want a castle with a secret staircase because it would be awesome. What I did was spend a day of awesomeness building it myself instead of downloading it and only having five minutes of awesomeness.
How would I update my probabilities if I saw the opposite piece of evidence? What I’m trying to get at here is that “A” and “not A” can’t both be evidence for the same thing. And often it’s more obvious which way “not A” is pointing. A couple of examples:
I saw someone suggesting that maybe a certain Mr. Far Wright was secretly gay because, when the subject was broached, he had publicly expressed his dislike of homosexuality. There was even a wiki page (that I now can’t find) laying out the “law” that the more a person sounds like they hate gays, the more likely they are to be gay. At first this sounded appealing*, but then I applied the “not A” test: “if Mr. Far Wright’s sexual orientation is unknown and I heard him publicly declare that he loved homosexual behavior, how would I update the probability that he is gay?” In that case, it seems clear that I’d update it towards him being gay. Therefore, it doesn’t really make sense that when Mr. Wright does the opposite—publicly declaring that he hates homosexual behavior—I also update my probability that he is gay.
Or another recent example I had from talking with someone about Mormonism. Someone said that not having the golden plates available for inspection wasn’t really evidence against Joseph Smith’s story because there were several good reasons why they weren’t available. I was about to concede when I realized that a world where the golden plates were observable would be strong evidence for Joseph Smith’s story so a world where they aren’t has to be at least weak evidence against his story. If A moves the probability quite a bit one way, not A has to at least minimally move the probability the other way.
*Sometimes, if all I can observe is a denial, it is evidence that the person is guilty. For example, if I walked through the door and the first thing I heard was my toddler denying to my wife that he took the candy, it increases my probability that he did take the candy. But to my wife—who already has the evidence that led her to make the accusation—a denial is evidence against him taking the candy (it increases the relative odds that his brother did it instead).
Did I keep all of my reasoning here correct? If not, there might be a better way to express the idea with a Bayesian network.
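One minimal way to check the footnote’s claim numerically: condition on different background evidence and see which way the same denial moves the probability. All priors and likelihoods below are invented for illustration:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H | E) from a prior and two likelihoods."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# My perspective walking through the door: the event is "a denial is
# happening at all", which mostly occurs when there was reason to
# accuse someone. (All numbers invented for illustration.)
p_me = posterior(prior=0.5, p_e_given_h=0.6, p_e_given_not_h=0.1)
print(p_me)    # ~0.86, well above 0.5 -> the denial raised my P(guilty)

# My wife's perspective: she already conditioned on the evidence that
# prompted the accusation; given that, the event is just "he denies it",
# and an innocent child denies even more reliably than a guilty one.
p_wife = posterior(prior=0.9, p_e_given_h=0.7, p_e_given_not_h=0.95)
print(p_wife)  # ~0.87, below her 0.9 prior -> the denial lowered her P(guilty)
```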
Company mission statements are notoriously abstract and might make a good starting place. If someone didn’t know anything about a company and they went and read the mission statement, they probably wouldn’t have a much better idea of what the company actually did.
For example, if (stereotypical) Grandpa asked you what Google was and you replied, “they organize the world’s information and make it universally accessible and useful” you probably wouldn’t do much to help him understand what Google is (despite that being one of the best mission statements I can think of). Instead, if you gave a specific example such as: “If you’re driving to a new store you can type in the store address and Google will print out a map of how to get there, along with detailed instructions. It’s more convenient than a traditional printed map because if you don’t know the address you can type in the store name and Google will tell you the address and show you pictures of the view from the street so you’ll be able to recognize it when you’re driving there.” Grandpa would probably have a better idea of what Google does.
So the activity would be to take a company mission statement (abstract) and come up with several specific examples of things the company did that you could use to describe it to your grandparents if they’d never heard of it before. The reason to start with the mission statement is so that participants can mentally contrast abstract statements (that wouldn’t help Grandma understand) with specific examples (that would) and hopefully learn to avoid making the abstract statements themselves. (Participants who are grandparents themselves can use companies and products that no longer exist, which younger people don’t understand, and pretend they’re explaining them to their grandkids.)
Some variation of “What is the other person’s actual objective?” Or “Why did they do that?” or “What are they actually asking me?”
I started this habit in chess where it’s always useful to ask ‘why did my opponent make their last move?’ (and then see if there are answers past the obvious one). But I’ve also found it useful in other areas. Several times at work I’ve gone through iterations of something with someone because I answered exactly what they said instead of what they actually wanted. I now try to stop and ask them what their actual purpose is and it often saves me a bit of work.
I really like this analysis a lot. For whatever it adds, Google Trends shows it peaking in July 2011, but mostly holding steady. There might be a small decline in the last six months though.
Just some feedback: I’m probably about average in math skill here (or maybe below average; the most math I’ve done is calculus 10 years ago) and with some work I’m able to get through some of this. When I first looked at it I didn’t understand anything, but after reading the Wikipedia article on the VNM utility theorem and the always helpful List of Mathematical Symbols I was able to get through most of Lemma 1. I was able to prove it to my satisfaction using the solver in Excel and can follow most of the proof up until “Thus, the result follows”; I don’t see how it follows.
Are there any recommendations for slowly improving math skills other than just trying to work through things like this when time permits? Are people willing to host a Google Hangout where they walk through things such as this for those of us who are curious but have difficulty working it out all on our own? (I know I probably could work it all out given enough time, but it’s hard to be motivated enough to make the time. When I first found the site, I didn’t know about Bayes’ theorem or any of the probability theory notation, but I saw its importance and so made sure to spend the time; now I can follow it and work it out on my own when needed.)
Here’s a good example of where I was fooled where I shouldn’t have been if I’d been thinking like a proper Bayesian. Prior to reading the article I would have given something like 1/1000 that computers could “solve” a main-line chess opening (to the definition given in the article, which is just that the computer evaluates each line as winning, not that every possible position has been examined). I’d also try to plug in reasonable numbers for newspapers reporting a story as true/false when the story is actually true/false, something like p(newspaper reports true given story is actually true) = 95% and p(newspaper reports as true given story is actually false) = 20%. Doing the math, there is then almost no chance that the article was true (less than 1%).
And I should have been able to do this in my head. Even if the newspaper reported true stories as true 99% of the time, and a false story as true only 1% of the time, there would have still been about 10 to 1 odds that it wasn’t true.
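The arithmetic above can be checked directly; the prior and the newspaper reliability figures are just the rough guesses from the comment:

```python
def posterior(prior, p_report_true, p_report_false):
    """P(story true | newspaper reports it as true), via Bayes' rule."""
    num = prior * p_report_true
    return num / (num + (1 - prior) * p_report_false)

prior = 1 / 1000  # rough prior that an opening could be "solved" this way

# With my rough reliability guesses, the posterior is well under 1%:
p = posterior(prior, p_report_true=0.95, p_report_false=0.20)
print(p)   # ~0.005

# Even with a far more reliable newspaper (99% / 1%), the odds are
# still roughly 10 to 1 against the story being true:
p2 = posterior(prior, p_report_true=0.99, p_report_false=0.01)
print(p2)  # ~0.098
```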
So why did I get fooled? I didn’t ever stop to think about it properly, which is embarrassing. Why not? I saw the link from MR and I apparently over-trust Tyler Cowen as a gatekeeper. Had a random person told me about the article I probably would have called BS on it (as I’ve done before in similar situations), but because someone I trust made the assertion I forgot to apply my brain filters, probably assuming he already had.
Moral of the story, I need to always, at least briefly, think about my priors and how strong of evidence the source is when I learn new information. Especially if it comes from a source I trust because I’m more prone to believe it.
Thanks. I’d love to share this material with people but the format makes it hard as many people seem to have an aversion to a collection of blog posts. I look forward to buying the book so I can loan it to people.
I was going to point that out too as I think it demonstrates an important lesson. They were still wrong.
Almost all of their thought processes were correct, but they still got to the wrong result because they looked at solutions too narrowly. It’s quite possible that many of the objections to AI, rejuvenation, and cryonics are correct, but if there’s another path they’re not considering, we could still end up with the same result. Just like a chess program doesn’t think like a human but can still beat one, and an airplane doesn’t fly like a bird but can still fly.
I took it. Thanks for doing this every year; the results are very interesting.