Cryonics is an experiment. So far the control group isn’t doing very well.
Dr. Ralph Merkle (quoted on the Alcor website—I’m surprised this hasn’t been posted before, but I can’t find it in the past pages)
There are real life examples where reality has turned out to be the “least convenient of possible worlds”. I have spent many hours arguing with people who insist that there are no significant gender differences (beyond the obvious), and are convinced that to assert otherwise is morally reprehensible.
They have spent so long arguing that such differences do not exist, and that this is the reason sexism is wrong, that their morality just can’t cope with a world in which this turns out not to be true. There are many similar politically charged issues—Pinker discusses quite a few in The Blank Slate—where people aren’t willing to listen to arguments about factual issues because they believe they have moral consequences.
The problem, of course—and I realise this is the main point of this post—is that if your morality is contingent on empirical issues where you might turn out to be wrong, you have to accept the consequences. If you believe that sexism is wrong because there are no heritable gender differences, you have to be willing to accept that if these differences do turn out to exist then you’ll say sexism is ok.
This is probably a test you should apply to all of your moral beliefs: if it just so happens that I’m wrong about the factual issue on which I’m basing my belief, will I really be willing to change my mind?
I think this post could do with some estimates of absolute risks.
According to the site you link to, there are 7476 deaths in traffic accidents for people in the 15-24 age range (NB—this presumably includes pedestrians, so is a massive overestimate of the deaths of people who were driving, but I’ll go with it for now).
In total, there were 21,859,806 males in your age group, so your probability of dying in a road traffic accident in any given year is approximately 0.0003. This translates to a risk per day of approximately 0.0000009.
Combining these numbers naively, the risk of dying in a traffic accident on the first snowy day is approximately 0.0000009 × 1.14. In other words, your excess risk of dying by driving on the first snowy day is approximately 0.0000001. Even assuming that driving in snow is 10 times more dangerous than driving in normal conditions, this excess risk is only around 1 in 1 million. Is it really worth going out of your way to avoid driving on the first snowy day to avoid a 1 in 1 million increased chance of dying?
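These figures are easy to reproduce. A back-of-envelope sketch (the death and population counts are the ones quoted above; the interpretation of the "10 times more dangerous" case as scaling the excess is mine):

```python
# Back-of-envelope check of the figures quoted above.
deaths_per_year = 7476        # traffic deaths, males aged 15-24 (from the linked site)
population = 21_859_806       # males in that age range

annual_risk = deaths_per_year / population   # ~0.0003
daily_risk = annual_risk / 365               # ~0.0000009

# The odds ratio for the first snowy day is 1.14, so the *excess* daily risk is:
excess_risk = daily_risk * 0.14              # ~0.0000001

# Even if snow were 10 times as dangerous, the excess would still only be roughly:
excess_risk_10x = excess_risk * 10           # ~1 in 1 million

print(annual_risk, daily_risk, excess_risk, excess_risk_10x)
```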
It is worth noting that, as an avid transhumanist, Michael might well think that a 1 in 1 million increased chance of living as long as the Singularity, or of dying in such a way as to allow his head to be frozen, is worth quite a lot. But by revealed preference, most people in the US are only willing to pay around $10 to avoid a 1 in 1 million chance of dying (cf. the Value of Statistical Life), and so should probably only avoid driving on the first snowy day if doing so costs them less than $10 worth of inconvenience.
The other examples could do with a similar analysis. A good way of thinking about it is how many fewer people would you expect to die if 1000 people took your advice.
Another point is that, even for the avid transhumanist, it seems unlikely that avoiding traffic accidents really is the best way of trying to live long enough to reach the Singularity—basically no one dies before the age of 40. A much more common fate for today’s 15-24 year olds than dying in an RTA is living until 45 and then dying of coronary heart disease, so you should probably look at optimising your lifestyle and diet to avoid that before you worry too much about getting an iPod cable for the car (although, actually, just running some quick numbers in my head, that one is likely to be a good investment). I note Dmytry has already made a comment along these lines.
Finally, some of the advice from other people that you’ve included in your bullet-pointed list is just terrible. Cycling is around 10 times more dangerous per passenger mile than driving. One anecdote suggesting that cycling might improve your ability to drive safely cannot possibly outweigh the massive evidence that cycling is far more likely to get you killed than driving is. Similarly, I would like to see some evidence that, say, driving safety courses actually help. “Someone on the internet said so” is not very convincing.
What’s my point?
First—thou shalt not report odds ratios. There is a wealth of literature showing that people make better decisions when presented with absolute risk estimates than with odds ratios.
Second—it always pays to crunch some numbers. Car crashes are the most common cause of death among 15-24 year olds, but it is far from clear that steps taken to avoid car crashes are the best way for 15-24 year olds to extend their lifespan.
Third, [citation needed]. If you are compiling a list of advice like this, I think the onus is on you to make some effort to check that the advice you’re giving is useful, or at least to put a disclaimer saying that you haven’t. This could be a useful resource, if it could be trusted.
Did the survey. I don’t know what cisgender means, but I assume that’s me, as I’m definitely not transgender...
Over the summer, Eliezer suggested (approximately, I am repeating this from memory) the following method for making an important decision:
write down a list of all of the relevant facts on either side of the argument.
assign numerical weights to each of the facts, according to how much they point you in one direction or another.
burn the piece of paper on which you wrote down the facts, and go with your gut.
This was essentially the method I used in coming to my (probably slightly low) estimate of the probability that Knox and Sollecito were innocent. It just felt like they were innocent, and I saw essentially no reason to suspect they were guilty. I will note that the ‘pro-guilt’ site that komponisto linked to was just horribly devoid of anything that I might consider evidence (if anything, that site did more to convince me of Knox’s innocence than the pro-innocence site), and I did spend probably about 10 minutes trying to find some evidence that they had missed, but completely failed.
On a different note, as I said at the time, 0.95 and 0.05 were just proxies for “pretty damn sure” and “pretty damn unlikely”—I have very little idea what 5% probability feels like, and I’m sure that if arbitrary scientific convention had settled on some different number for significance, I’d have picked that one instead. I have made some progress since a year ago on calibrating my estimates of small probabilities, but I absolutely do not think that I would be wrong approximately 1 time in 20 when making predictions to which I assign a probability of 0.95.
Pretty much a corollary of this is Steve Landsburg’s (for some reason controversial) point that you should only ever be donating money to one charity at a time (unless you’re ridiculously rich). The charity which makes the best use out of your first $1 donation is almost certainly also the charity which makes the best use out of your 1000th dollar as well. Once you’ve done the calculation, spreading your money between different charities isn’t hedging your bets, it’s giving money to the wrong charity.
See his Slate article for a slightly more fleshed out version of the reasoning.
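A toy version of the marginal argument, with made-up cost-per-life figures (the numbers are hypothetical, only the comparison matters):

```python
# Hypothetical cost-per-life-saved figures for two charities.
cost_a = 2000.0   # charity A saves a life per $2000 (assumed)
cost_b = 3500.0   # charity B saves a life per $3500 (assumed)
budget = 1000.0   # a donor far too small to move either charity's marginal cost

# Splitting the budget between the two:
lives_split = (budget / 2) / cost_a + (budget / 2) / cost_b

# Giving everything to the better charity:
lives_all_a = budget / cost_a

print(lives_split, lives_all_a)
```

As long as your donation is too small to change either charity’s marginal cost-effectiveness, giving everything to A dominates any split.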
Do something about the “Help” link when writing comments.
A specific suggestion: change the link so it says “comment formatting”, but definitely do something to make it clear where to find the formatting help.
I think the simple answer is probably no—people are just good at seeing patterns in data that is actually random. Testing the frequencies of these data against a Poisson distribution I get a p-value of something like 0.7. In other words, this looks exactly like it would if people were posting at random and independently of one another.
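A minimal sketch of that goodness-of-fit check, with made-up posting counts standing in for the real data (the actual counts aren’t reproduced here, so the resulting statistic is illustrative only):

```python
# Chi-square goodness-of-fit test of count data against a Poisson distribution.
from collections import Counter
import math

# Hypothetical data: number of posts in each of 20 time windows.
posts_per_window = [0, 1, 0, 2, 1, 0, 0, 3, 1, 0, 2, 1, 0, 0, 1, 2, 0, 1, 0, 1]
n = len(posts_per_window)
lam = sum(posts_per_window) / n  # maximum-likelihood estimate of the Poisson rate

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

observed = Counter(posts_per_window)
chi2 = 0.0
for k in range(max(observed) + 1):
    expected = n * poisson_pmf(k, lam)
    chi2 += (observed.get(k, 0) - expected) ** 2 / expected

# (A rigorous test would merge bins with small expected counts before
# computing the p-value; this is just a sketch of the procedure.)
print(lam, chi2)
```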
And 23-27 January is not Christmas where I come from…
The First 20 Hours (Josh Kaufman):
Practice something for 20 hours, and you’ll learn a lot. Don’t worry about feeling stupid/clumsy.
I don’t think it is an accurate reflection of the community. It certainly doesn’t reflect my experience with the LW communities in Toronto and Waterloo.
It is also not an accurate depiction of the community in London or Edinburgh (UK). However, I think it is pretty close to exactly what I would expect a tabloid summary of the Berkeley community to look like, based on my personal experience. The communities in Berkeley and NY really are massively different in kind to those pretty much anywhere else in the world (again, from personal experience).
And, as Kevin says, it is remarkably nice—they could have used exactly the same content to write a much more damning piece.
I agree with most of this, but I think you’re skipping a really important issue:
“There’s a big difference between dismissing that whole Lost Continent of Atlantis story, and prematurely dismissing it.”
Well, sure, but we need some way of deciding when our dismissal is premature. I mentioned this in the comments on Talking Snakes as well… there is certainly some room for the absurdity heuristic—my time is valuable, I can’t evaluate the evidence for every crank claim in the world (I know several academics who could easily spend their entire lives checking “proofs” that P=NP if they did this). I have to reject some of them out of hand—the issue is: which ones?
If someone tells me they’ve built a perpetual motion machine in the back garden from tin cans and elastic bands, I’m not going to waste even 10 minutes of my life trying to replicate it. If Stephen Hawking tells me he’s built one, I’ll at least give the matter some consideration. The real issue is where we draw the line between claims which are too absurd to bother looking at the evidence for and those we should take the time out to evaluate; you don’t seem to have said anything yet that will help us make that decision (perhaps you’re getting there).
So, everyone agrees that commuting is terrible for the happiness of the commuter. One thing I’ve struggled to find much evidence about is how much the method of commute matters. If I get to commute to work in a chauffeur driven limo, is that better than driving myself? What if I live a 10 minute drive/45 minute walk from work, am I better off walking? How does public transport compare to driving?
I suspect the majority of these studies are done in US cities, so mostly cover people who drive to work (with maybe a minority who use transit). I’ve come across a couple of articles which suggest cycling > driving here and conflicting views on whether driving > public transit here but they’re just individual studies—I was wondering if there’s much more known about this, and figured that if there is, someone here probably knows it. If no one does, I might get round to a more thorough perusal of the literature myself now I’ve publicly announced that the subject interests me.
If this actually works reliably, I think it is much more important than anything in either of the posts you used it to write—why bury it in a comment?
If you’re hiring, you’re probably better off not doing interviews.
My own experience strongly suggests to me that this claim is inane—and is highly dangerous advice… My personal experience from interviewing many, many candidates for a large company suggests that interviewing is crucial (though I will freely grant that different kinds of interviews vary wildly in their effectiveness).
The whole point of this article is that experts often think themselves better than SPRs when actually they perform no better than SPRs on average. Here we have an expert telling us that he thinks he would perform better than an SPR. Why should we be interested?
I’m sorry, but you just don’t get a Bayes Factor of 10^40 by considering the alleged testimony of people who have been dead for 2000 years. There have to be thousands of things which are many orders of magnitude more likely than this that could have resulted in the testimony being corrupted or simply falsified.
You don’t even need to read the article to see that 10^39 is just a silly number, but for those interested, it is obtained by assuming that the probability of each of the disciples believing in the Resurrection is independent of the probabilities for the other disciples. Despite the fact that the independence assumption is clearly nonsense, and they themselves describe it as a “first approximation”, they then go on to quote this 10^39 figure throughout the rest of the article, and in the interview.
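To see how the independence assumption drives the number, a toy calculation (all probabilities hypothetical):

```python
# Under independence, per-witness probabilities multiply into an astronomically
# small joint probability:
p_each = 1e-3        # hypothetical chance any one witness falsely attests
n_witnesses = 13
p_independent = p_each ** n_witnesses   # 1e-39

# But a single common cause (collusion, shared delusion, later textual
# corruption) can explain all the testimony at once, so the joint probability
# is bounded below by the probability of that one cause:
p_common_cause = 1e-4                   # hypothetical
p_joint = max(p_independent, p_common_cause)

print(p_independent, p_joint)
```

The independence assumption turns a 1-in-10,000 scenario into a 1-in-10^39 one, which is exactly the move the paper makes.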
I’m sorry, but it’s this section where the paper just starts to get silly.
Well, ok, that does sound pretty unlikely. But is its improbability really even on the order of 10^39? Have the authors actually thought about what 10^39 means?
If you took every single person who has ever lived, and put them in a situation similar to the disciples’ every second for the entire history of the Universe, you wouldn’t even be coming close to 10^39 opportunities for them to make up such an elaborate plot. Are they really suggesting that it’s that unlikely?