Good point: mainstream cryonics would be a big step towards raising the sanity waterline, which may end up being a prerequisite to reducing various kinds of existential risk. However, I think that the causal relationship goes the other way, and that raising the sanity waterline comes first, and cryonics second: if you can get the average person across the inferential distance to seeing cryonics as reasonable, you can most likely get them across the inferential distance to seeing existential risk as really flippin’ important. (I should take the advice of my own post here and note that I am sure there are really strong arguments against the idea that working to reduce existential risk is important, or at least against having much certainty that reducing existential risk will have been the correct thing to do upon reflection, at the very least on a personal level.) Nonetheless, I agree further analysis is necessary, though difficult.
But how do we know that’s the way it will pan out? Raising the sanity waterline is HARD. SUPER-DUPER HARD. Like, you probably couldn’t make much of a dent even if you had a cool $10 million in your pocket.
An alternative scenario is that cryonics gets popular without any “increases in general sanity”, for example because the LW/OB communities give the cryo companies a huge increase in sales and a larger flow of philanthropy, which allows them to employ a marketing consultancy to market cryonics to exactly the demographic who are already signing up. Additional signups would then come not from increased population sanity, but simply from marketing cryo so that 20% of those who are already sane enough to sign up hear about it, rather than 1%.
I claim that your $10M would be able to increase cryo signup by a factor of 20, but probably not dent sanity.
Your original point was that “getting cryo to go mainstream would be a strong win as far as existential risk reduction is concerned (because then the public at large would have a reason to care about the future) and as far as rationality is concerned”, in which case your above comment is interesting, but tangential to what we were discussing previously. I agree that getting people to sign up for cryonics will almost assuredly get more people to sign up for cryonics (barring legal issues becoming more salient and thus potentially more restrictive as cryonics becomes more popular, or bad stories being publicized, whether true or false), but “because then the public at large would have a reason to care about the future” does not seem to be a strong reason to expect existential risk reduction as a result (one counterargument being the one raised by timtyler in this thread). You have to connect cryonics with existential risk reduction, and the key isn’t futurism, but strong epistemic rationality. Sure, you could also get interest sparked via memetics, but I don’t think the most cost-effective way to do so would be investment in cryonics as opposed to, say, billboards proclaiming ‘Existential risks are even more bad than marijuana: talk to your kids.’ Again, my intuitions are totally uncertain about this point, but it seems to me that option a) 10 million dollars → cryonics investment → increased awareness of futurism → increased awareness of existential risk reduction, is most likely inferior to option b) 10 million dollars → any other memetic strategy → increased awareness of existential risk reduction.
It is true that there are probably better ways out there to reduce x-risk than via cryo, i.e. the first $10M you have should go into other stuff, so the argument would carry for a strict altruist not to get cryo.
However, the fact that cryo is both cheap and useful in and of itself means that the degree of self-sacrifice required to decide against it is pretty high.
For example, your $1 a day on cryo provides the following benefits to x-risk reduction:
potentially increased personal commitment from you
network effects causing others to be more likely to sign up and therefore not die and potentially be more concerned and committed
revenue and increased numbers/credibility for cryo companies
potentially increased rationality because you expect more to actually experience the future
Now you could sacrifice your $1 a day and get more x-risk reduction by spending it on direct x-risk efforts (in addition to the existing time and money you are putting that way), BUT if you’re going to do that, then why not sacrifice another marginal $1 a day of food/entertainment money?
Benton House has not yet reached the level of eating the very cheapest possible food and doesn’t yet spend $0 per person per day on luxuries.
And if you continue to spend more than $1 a day on food and luxuries, do you really value your life at less than one Hershey bar a day?
I think that there is another explanation: people are using extreme altruism as a cover for their own irrationality, and if a situation came up where they could either contribute net +$9000 (cost of cryo) to x-risk right now but die OR not die, they would choose to not die. In fact, I believe that a LW commenter has worked out how to sacrifice your life for a gain of a whole $1,000,000 to x-risk using life insurance and suicide. As far as I know, people who don’t sign up for cryo for altruistic reasons are not exactly flocking to this option.
(EDIT: I’ll note that this comment does constitute a changing argument in response to the fact that Will’s counterargument quoted at the top defeats the argument I was pursuing before)
And if you continue to spend more than $1 a day on food and luxuries, do you really value your life at less than one Hershey bar a day?

I think the correct question here is instead “Do you really value a very, very small chance that having signed up for cryonics leads to huge changes in your expected utility in some distant future across unfathomable multiverses more than an assured small amount of utility 30 minutes from now?” I do not think the answer is obvious, but I lean towards avoiding long-term commitments until I better understand the issues. Yes, a very very very tiny amount of me is dying every day due to freak kitchen accidents, but that much of my measure is so seemingly negligible that I don’t feel too horrible trading it off for more thinking time and half a Hershey’s bar.
The reasons you gave for spending a dollar a day on cryonics seem perfectly reasonable and I have spent a considerable amount of time thinking about them. Nonetheless, I have yet to be convinced that I would want to sign up for cryonics as anything more than a credible signal of extreme rationality. From a purely intuitive standpoint this seems justified. I’m 18 years old and the singularity seems near. I have measure to burn.
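To make that trade-off concrete, here is a minimal expected-value sketch. Every number in it (the probability that cryonics works, the value of the saved future, the years of premiums) is a hypothetical placeholder, not anyone’s actual estimate in this thread:

```python
# Toy expected-value comparison for the "dollar a day vs. tiny chance of a
# huge future" question. All numbers are illustrative placeholders.

p_saved = 1e-6          # hypothetical chance that signing up "saves" you
value_if_saved = 1e9    # hypothetical dollar-equivalent value of that future
daily_cost = 1.0        # $1/day premium, per the thread
years_paying = 60       # hypothetical span of paying premiums

total_cost = daily_cost * 365 * years_paying          # $21,900
expected_benefit = p_saved * value_if_saved           # $1,000

print(f"expected benefit: ${expected_benefit:,.0f}")
print(f"total cost:       ${total_cost:,.0f}")
# With these placeholders the cost wins; at p_saved = 1e-4 the benefit wins.
# The disagreement below is precisely over which probability to plug in.
```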
Can you give me a number? Maybe we disagree because of differing probability estimates that cryo will save you.
Perhaps. I think a singularity is more likely to occur before I die (in most universes, anyway). With advancing life extension technology, good genes, and a disposition to be reasonably careful with my life, I plan on living pretty much indefinitely. I doubt cryonics has any effect at all on these universes for me personally. Beyond that, I do not have a strong sense of identity, and my preferences are not mostly about personal gain, and so universes where I do die do not seem horribly tragic, especially if I can write down a list of my values for future generations (or a future FAI) to consider and do with as they wish.
So basically… (far) less than a 1% chance of saving ‘me’, but even then, I don’t have strong preferences for being saved. I think that the technologies are totally feasible and am less pessimistic than others that Alcor and CI will survive for the next few decades and do well. However, I think larger considerations like life extension technology, uFAI or FAI, MNT, bioweaponry, et cetera, simply render the cryopreservation / no cryopreservation question both difficult and insignificant for me personally. (Again, I’m 18, these arguments do not hold equally well for people who are older than me.)
a disposition to be reasonably careful with my life

When I read this, two images popped unbidden into my mind: 1) you wanting to walk over the not-that-stable log over the stream with the jagged rocks in it and 2) you wanting to climb out on the ledge at Benton House to get the ball. I suppose one person’s “reasonably careful” is another person’s “needlessly risky.”
This comment inspired me to draft a post about how much quantum measure is lost doing various things, so that people can more easily see whether or not a certain activity (like driving to the store for food once a week instead of having it delivered) is ‘worth it’.
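As a rough sketch of what such a post might compute (the per-activity risk numbers below are invented for illustration, not researched figures):

```python
# Toy "measure lost" calculator: treat the quantum measure lost to an
# activity as the probability of dying from it over a year.
# All risk numbers are invented placeholders.

activities = {
    "drive to the store weekly": (1e-7, 52),  # (p_death_per_event, events/yr)
    "have groceries delivered":  (1e-8, 52),
    "cross a slippery log once": (1e-5, 1),
}

for name, (p_event, n) in activities.items():
    measure_lost = 1 - (1 - p_event) ** n   # assumes independent events
    print(f"{name}: ~{measure_lost:.1e} of your measure per year")
```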
Ha, good times. :) But being careful with one’s life and being careful with one’s limb are two very different things. I may be stupid, but I’m not stupid.
Unless you’re wearing a helmet, moderate falls, which 99+% of the time just result in a few sprains/breaks, may <1% of the time give permanent brain damage (mostly I’m thinking of hard objects’ edges striking the head). Maybe my estimate is skewed by fictional evidence.
So a 1 in 100 chance of falling and a roughly 1 in 1,000 chance of brain damage conditional on that (I’d be really surprised if it were higher; biased reporting and whatnot) works out to about a 1 in 100,000 chance of severe brain damage. I have put myself in such situations roughly… 10 times in my life. I think car accidents when constantly driving between SFO and Silicon Valley are a more likely cause of death, but I don’t have the statistics on hand.
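Spelling that arithmetic out with the figures from the comment above:

```python
# The brain-damage estimate above, made explicit.
p_fall = 1 / 100                 # chance of falling per risky crossing
p_damage_given_fall = 1 / 1000   # chance of severe brain damage given a fall
exposures = 10                   # rough lifetime count of such situations

p_per_exposure = p_fall * p_damage_given_fall        # 1 in 100,000
p_lifetime = 1 - (1 - p_per_exposure) ** exposures   # ~1 in 10,000

print(f"per exposure: {p_per_exposure:.0e}")         # 1e-05
print(f"over {exposures} exposures: {p_lifetime:.1e}")
```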
Good point about car risks. Sadly, I was considerably less cautious when I was younger—when I had more to lose. I imagine this is often the case.
How much far less? 0? 10^-1000?
[It is perfectly OK for you to endorse the position of not caring much about yourself whilst still acknowledging the objective facts about cryo, even if they seem to imply that cryo could be used relatively effectively to save you … facts =! values …]
Hm, thanks for making me really think about it, and not letting me slide by without doing the calculation. It seems to me, given my preferences (about which I am not logically omniscient) and given my structural uncertainty around these issues (of which there is much), that my 50 percent confidence interval is between .00001% (1 in 10 million) and .01% (1 in ten thousand).
shouldn’t probabilities just be numbers?
i.e. just integrate over the probability distribution of what you think the probability is.
Oh, should they? I’m the first to admit that I’m sorely lacking in knowledge of probability theory. I thought it was better to give a distribution here to indicate my level of uncertainty as well as my best guess (precision as well as accuracy).
Contra Roko, it’s OK for a Bayesian to talk in terms of a probability distribution on the probability of an event. (However, Roko is right that in decision problems, the mean value of that probability distribution is quite an important thing.)
This would be true if you were estimating the value of a real-world parameter like the length of a rod. However, for a probability, you just give a single number, which is representative of the odds you would bet at. If you have several conflicting intuitions about what that number should be, form a weighted average of them, weighted by how much you trust each intuition or method for getting the number.
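A minimal sketch of that weighting procedure, with hypothetical intuitions and trust weights:

```python
# Collapse several conflicting intuitions about one probability into a
# single betting number, weighted by trust. All inputs are hypothetical.

estimates = [1e-7, 1e-5, 1e-4]   # three intuitions about p(cryo saves me)
trust     = [0.5,  0.3,  0.2]    # trust in each intuition; sums to 1

p = sum(w * e for w, e in zip(trust, estimates))
print(p)   # ~2.3e-05 -- dominated by the largest estimate (cf. the reply below)
```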
Ahhh, makes sense, thanks. In that case I’d put my best guess at around 1 in a million.
For small probabilities, the weighted average calculation is dominated by the high-probability possibilities—if your 50% confidence interval extends up to 1 in 10,000, then 25% of the probability mass is to the right of 1 in 10,000, so you can’t say anything less than (0.75)×0 + (0.25)×(1 in 10,000) = 1 in 40,000.
I wasn’t using a normal distribution in my original formulation, though: the mean of the picture in my head was around 1 in a million with a longer tail to the right (towards 100%) and a shorter tail to the left (towards 0%) (on a log scale?). It could be that I was doing something stupid by making one tail longer than the other?
It would only be suspicious if your resulting probability were a sum of very many independent, similarly probable alternatives (such sums do look normal even if the individual alternatives aren’t).
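To put numbers on both points, here is a sketch assuming the distribution of estimates is log-normal (median 1 in a million, wide spread); the shape and parameters are one hypothetical choice, not anything stated in the thread:

```python
# Mean of a "probability of a probability" distribution: the right tail
# dominates. Log-normal shape and parameters are hypothetical.
import math
import random

random.seed(0)
mu, sigma = math.log(1e-6), 2.0   # median 1e-6, wide spread on a log scale
samples = [min(math.exp(random.gauss(mu, sigma)), 1.0) for _ in range(100_000)]

mean_p = sum(samples) / len(samples)
print(f"median ~1e-06, mean = {mean_p:.1e}")  # mean is several times the median

# The truncation bound from the earlier comment, in general form:
# mean >= (mass at or above t) * t, for any threshold t.
t = 1e-4
mass = sum(s >= t for s in samples) / len(samples)
print(f"mean {mean_p:.1e} >= {mass:.3f} * {t:.0e} = {mass * t:.1e}")
```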
It seems to me, given my preferences (about which I am not logically omniscient) [...]

I’d say your preferences can’t possibly influence the probability of this event. To clear the air, can you explain how taking your preferences into account influences the estimate? Better, how does the estimate break down across the different defeaters (events making the positive outcome impossible)?
Sorry, I should have been more clear: my preferences influence the possible interpretations of the word ‘save’. I wouldn’t consider surviving indefinitely but without my preferences being systematically fulfilled ‘saved’, for instance; more like damned.
I like this turn of phrase.