Until then, I’d be more interested in donating to general life extension research than paying for cryonics specifically.
This is very similar to my primary objection to cryonics.
I realize that, all factors considered, the expected utility you’d get from signing up for cryonics is extremely large. Certainly large enough to be worth the price.
However, it seems to me that there are better alternatives. Sure, paying for cryonics increases your chances of nigh-immortality by orders of magnitude. On the other hand, funding longevity research makes it more likely that we will overcome aging and disease at all. Unlimited life for most or all of the future human population is far more important than unlimited life for yourself, right? (One might object that life extension research is already on its way to accomplishing this regardless of your contributions, which brings me to my next point.)
If an existential risk comes to pass, then no one will have a chance at an unlimited life. All of the time and money spent on cryonics will go to waste, and life extension research will have been (mostly) squandered. Preventing this sort of risk is therefore far more important than preserving any one person, even if that person is you. To make matters worse, there are multiple existential risks that have a significant chance of happening, so the need for extra attention and donations is much greater than the need for extra longevity research.
To summarize: Cryonics gives you alone a far bigger chance of nigh-immortality. Working to prevent existential risk gives billions of people a slightly increased chance of the same.
It seems to me we shouldn’t be spending money on freezing (well, vitrifying) ourselves just in case a singularity (or equivalent scientific progress) happens. Instead, we should focus on increasing the chances that it will happen at all. To do anything else would be selfish.
Ok, time to take a step back and look at some reasons I might be wrong.
First, and perhaps most obviously, people are not inclined to donate all their money to any cause, no matter how important. I freely admit that I will probably donate only a small fraction of my earnings, despite the arguments I made in this post. Plus, it’s possible (likely?) that people would be more inclined to spend money on cryonics than on existential risk reduction, because cryonics benefits them directly. If someone is going to spend money selfishly, I suppose cryonics is the most beneficial way to do so.
Second, there’s a chance I misestimated the probabilities involved, and in fact your money would be best spent on cryonics. If the Cryonics Institute webpage is to be believed, the cheapest option costs $28,000, which is generally covered by life insurance at a cost of about $120 per year (this option also requires a one-time payment of $1,250). Unfortunately, I have no idea how much $1,250 plus $120 per year would help if donated to SIAI or another such organization. Cryonics certainly gives a huge expected reward, and I’m just guessing at the expected reward for donating.
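To make the structure of that comparison concrete, here is a toy expected-value calculation. Only the $1,250 one-time fee and $120/year premium come from the figures quoted above; every other number (years of premiums, revival probability, utility of revival) is a made-up placeholder, not an estimate from any real source:

```python
# Toy expected-value sketch. The $1,250 and $120/year figures are from
# the Cryonics Institute numbers quoted above; everything else is a
# hypothetical placeholder chosen only to show the shape of the math.

years = 40                # hypothetical: years of paying premiums
cryonics_cost = 1250 + 120 * years

p_revival = 0.02          # hypothetical: chance cryonics works for you
value_of_revival = 1e6    # hypothetical: utility of revival, in $-equivalents

ev_cryonics = p_revival * value_of_revival - cryonics_cost
print(ev_cryonics)        # 13950.0
```

The point is not the output but the missing right-hand side: a fair comparison needs the same calculation for the marginal effect of donating that money to existential-risk reduction, and that is the number I have no way to estimate.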
Thanks for the analysis, MathijsJ! It made perfect sense and resolved most of my objections to the article.
I was willing to accept that we cannot reach absolute certainty by accumulating evidence, but I also came up with multiple logical statements that undeniably seemed to have probability 1. Reading your post, I realized that my examples were all tautologies, and that your suggestion to allow certainty only for tautologies resolved the discrepancy.
The Wikipedia article timtyler linked to seems to support this: “Cromwell’s rule [...] states that one should avoid using prior probabilities of 0 or 1, except when applied to statements that are logically true or false.” This matches your analysis—you can only be certain of tautologies.
Also, your discussion of models neatly resolves the distinction between, say, a mathematically-defined die (which can be certain to end up showing an integer between 1 and 6) and a real-world die (which cannot quite be known for sure to have exactly six stable states).
Eliezer makes his position pretty clear: “So I propose that it makes sense to say that 1 and 0 are not in the probabilities; just as negative and positive infinity, which do not obey the field axioms, are not in the real numbers.”
It’s true—you cannot ever reach a probability of 1 if you start at 0.5 and accumulate evidence, just as you cannot reach infinity if you start at 0 and add integer values. And the reverse holds, too—you cannot accumulate evidence against a tautology and bring its probability down to anything less than 1. But this doesn’t mean a probability of 1 is an incoherent concept or anything.
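The "can't reach 1 by accumulating evidence" claim can be made concrete with the odds form of Bayes' theorem: each piece of evidence multiplies the odds by a finite likelihood ratio, so the posterior stays strictly below 1 no matter how many updates you pile on. A minimal sketch (using exact rational arithmetic so floating-point rounding doesn't fake certainty):

```python
from fractions import Fraction

def update(prob, likelihood_ratio):
    """One Bayesian update via the odds form: posterior odds = prior odds * LR."""
    odds = prob / (1 - prob)
    odds *= likelihood_ratio
    return odds / (1 + odds)

p = Fraction(1, 2)            # start at the 0.5 prior from the text
for _ in range(50):
    p = update(p, Fraction(10))  # fifty pieces of 10:1 evidence

print(p < 1)           # True: still strictly below 1
print(float(1 - p))    # tiny, but never exactly zero
```

Using `Fraction` rather than floats matters here: with ordinary floats, `1 - p` underflows to zero after a dozen or so updates, which would wrongly suggest that certainty had been reached.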
Eliezer: if you’re going to say that 0 and 1 are not probabilities, you need to come up with a new term for them. They haven’t gone away completely just because we can’t reach them.
Edit a year and a half later: I agree with the article as written, partially as a result of reading How to Convince Me That 2 + 2 = 3, and partially as a result of concluding that “tautologies that have probability 1 but no bearing on reality” is a useless concept, and that therefore, “probability 1” is a useless concept.