Can you give me a number? Maybe we disagree because of differing probability estimates that cryo will save you.
Perhaps. I think a singularity is more likely to occur before I die (in most universes, anyway). With advancing life extension technology, good genes, and a disposition to be reasonably careful with my life, I plan on living pretty much indefinitely. I doubt cryonics has any effect at all on these universes for me personally. Beyond that, I do not have a strong sense of identity, and my preferences are not mostly about personal gain, and so universes where I do die do not seem horribly tragic, especially if I can write down a list of my values for future generations (or a future FAI) to consider and do with as they wish.
So basically… (far) less than a 1% chance of saving ‘me’, but even then, I don’t have strong preferences for being saved. I think that the technologies are totally feasible and am less pessimistic than others that Alcor and CI will survive for the next few decades and do well. However, I think larger considerations like life extension technology, uFAI or FAI, MNT, bioweaponry, et cetera, simply render the cryopreservation / no cryopreservation question both difficult and insignificant for me personally. (Again, I’m 18, these arguments do not hold equally well for people who are older than me.)
a disposition to be reasonably careful with my life
When I read this, two images popped unbidden into my mind: 1) you wanting to walk over the not-that-stable log over the stream with the jagged rocks in it and 2) you wanting to climb out on the ledge at Benton House to get the ball. I suppose one person’s “reasonably careful” is another person’s “needlessly risky.”
This comment inspired me to draft a post about how much quantum measure is lost doing various things, so that people can more easily see whether or not a certain activity (like driving to the store for food once a week instead of having it delivered) is ‘worth it’.
Ha, good times. :) But being careful with one’s life and being careful with one’s limb are two very different things. I may be stupid, but I’m not stupid.
Unless you’re wearing a helmet, moderate falls that 99+% of the time result in nothing worse than a few sprains or breaks may, less than 1% of the time, cause permanent brain damage (mostly I’m thinking of hard objects’ edges striking the head). Maybe my estimate is skewed by fictional evidence.
So a 1 in 100 chance of falling, and roughly a 1 in 1,000 chance of brain damage conditional on that (I’d be really surprised if it were higher than that; biased reporting and whatnot), works out to about a 1 in 100,000 chance of severe brain damage. I have put myself in such situations roughly… 10 times in my life. I think car accidents from constantly driving between SFO and Silicon Valley are a more likely cause of death, but I don’t have the statistics on hand.
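A quick sanity check of that arithmetic, treating the ten exposures as independent (the figures are the comment’s own guesses, not statistics):

```python
# Rough check of the arithmetic above. All inputs are the guesses from
# the comment, not measured statistics.
p_fall = 1 / 100                 # chance of falling in one risky situation
p_damage_given_fall = 1 / 1000   # chance of severe brain damage, given a fall
p_per_event = p_fall * p_damage_given_fall
print(p_per_event)               # about 1 in 100,000 per situation

# Across roughly 10 such situations, the chance of at least one bad
# outcome is 1 minus the chance of escaping unscathed every time:
n_events = 10
p_any = 1 - (1 - p_per_event) ** n_events
print(p_any)                     # just under 1 in 10,000 lifetime
```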
Good point about car risks. Sadly, I was considerably less cautious when I was younger—when I had more to lose. I imagine this is often the case.
How much less? 0? 10^-1000?
[It is perfectly OK for you to endorse the position of not caring much about yourself whilst still acknowledging the objective facts about cryo, even if they seem to imply that cryo could be used relatively effectively to save you … facts != values …]
Hm, thanks for making me really think about it, and not letting me slide by without doing the calculation. It seems to me, given my preferences, about which I am not logically omniscient, and given my structural uncertainty around these issues, of which there is much, that my 50 percent confidence interval is between 0.00001% (1 in 10 million) and 0.01% (1 in 10,000).
Shouldn’t probabilities just be numbers?
I.e., just integrate over the probability distribution of what you think the probability is.
Oh, should they? I’m the first to admit that I sorely lack in knowledge of probability theory. I thought it was better to give a distribution here to indicate my level of uncertainty as well as my best guess (precision as well as accuracy).
Contra Roko, it’s OK for a Bayesian to talk in terms of a probability distribution on the probability of an event. (However, Roko is right that in decision problems, the mean value of that probability distribution is quite an important thing.)
This would be true if you were estimating the value of a real-world parameter like the length of a rod. However, for a probability, you just give a single number, which is representative of the odds you would bet at. If you have several conflicting intuitions about what that number should be, form a weighted average of them, weighted by how much you trust each intuition or method for getting the number.
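As a minimal sketch of that weighted-average procedure (the candidate probabilities and trust weights below are made-up placeholders, not anyone’s actual estimates):

```python
# Collapse several conflicting intuitions about a probability into the
# single number you would bet at, weighting by trust in each method.
intuitions = [1e-7, 1e-6, 1e-4]  # candidate probabilities from different methods
weights = [0.25, 0.50, 0.25]     # how much you trust each method (sums to 1)

p = sum(w * x for w, x in zip(weights, intuitions))
print(f"{p:.4e}")  # 2.5525e-05 -- the largest candidate dominates
```

Note how even a 25% weight on the 1-in-10,000 intuition swamps the smaller candidates, which is exactly the tail-dominance point made below.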
Ahhh, makes sense, thanks. In that case I’d put my best guess at around 1 in a million.
For small probabilities, the weighted average calculation is dominated by the high-probability possibilities: if your 50% confidence interval extends up to 1 in 10,000, then 25% of the probability mass lies to the right of 1 in 10,000, so you can’t say anything less than 0.75×0 + 0.25×(1/10,000) = 1/40,000.
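Spelled out with the same numbers, treating everything below the interval’s upper end as conservatively contributing zero:

```python
# Lower bound on the betting probability: 75% of the mass is treated as
# contributing 0 (maximally conservative), and the 25% at or above
# 1 in 10,000 contributes at least 1/10,000 each.
lower_bound = 0.75 * 0 + 0.25 * (1 / 10_000)
print(lower_bound)  # 2.5e-05, i.e. 1 in 40,000
```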
I wasn’t using a normal distribution in my original formulation, though: the mean of the picture in my head was around 1 in a million, with a longer tail to the right (towards 100%) and a shorter tail to the left (towards 0%), on a log scale. Was I doing something stupid by making one tail longer than the other?
It would only be suspicious if your resulting probability were a sum of very many independent, similarly probable alternatives (such sums do look normal even if the individual alternatives aren’t).
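A toy model (mine, not anything proposed in the thread) of why that right-skew matters for the single betting number: take the belief distribution to be lognormal over the probability, with median 1 in a million and about one order of magnitude of spread. The mean, which is what the bet tracks, sits far above the median:

```python
import math

# Toy skewed belief distribution: lognormal over the probability.
# Illustrative numbers only; we also ignore the sliver of lognormal
# mass that technically exceeds 1.
median = 1e-6
mu = math.log(median)
sigma = math.log(10)  # roughly one decade of spread on the log scale

# Mean of a lognormal is exp(mu + sigma**2 / 2): the long right tail
# pulls the expectation well above the median.
mean = math.exp(mu + sigma ** 2 / 2)

print(f"median: {median:.1e}")  # 1.0e-06
print(f"mean:   {mean:.1e}")    # about 1.4e-05, ~14x the median
```

So a picture whose center is 1 in a million can still imply betting odds closer to 1 in 70,000, which is Roko’s point about the weighted average being dominated by the high-probability tail.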
It seems to me, given my preferences, about which I am not logically omniscient, [...]
I’d say your preferences can’t possibly influence the probability of this event. To clear the air, can you explain how taking your preferences into account influences the estimate? Better, how does the estimate break down across the different defeaters (events that would make the positive outcome impossible)?
Sorry, I should have been more clear: my preferences influence the possible interpretations of the word ‘save’. I wouldn’t consider surviving indefinitely but without my preferences being systematically fulfilled ‘saved’, for instance; more like damned.