I object to many of your points, though I agree somewhat with your main thesis (that cryonics is not always rational).
“Weird stuff and ontological confusion: quantum immortality, anthropic reasoning, measure across multiverses, UDTesque ‘decision theoretic measure’ or ‘probability as preference’, et cetera, are not well-understood enough to make claims about whether or not you should even care about the number of ‘yous’ that are living or dying, whatever ‘you’ think you are.”
This argument basically reduces to, once you remove the aura of philosophical sophistication, “we don’t really know whether death is bad, so we should worry less about death”. This seems to me absurd. For more, read e.g. http://yudkowsky.net/other/yehuda.
“If people believe that a technological singularity is imminent, then they may believe that it will happen before they have a significant chance of dying:”
If you assume the median date for Singularity is 2050, Wolfram Alpha says I have a 13% chance of dying before then (cite: http://www.wolframalpha.com/input/?i=life+expectancy+18yo+male), and I’m only eighteen.
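For what it’s worth, a figure like that is easy to sanity-check. Here is a minimal sketch of the calculation, assuming a Gompertz–Makeham mortality model; the hazard constants (`makeham`, `a`, `b`) are illustrative stand-ins of my choosing, not values taken from Wolfram Alpha or any life table:

```python
import math

def p_death_between(age_now, age_then, makeham=8e-4, a=5e-5, b=0.085):
    """Probability of dying between age_now and age_then, conditional on
    being alive at age_now, under the hazard h(t) = makeham + a * exp(b*t).
    The default constants are rough illustrative values, not fitted ones."""
    cumulative_hazard = (makeham * (age_then - age_now)
                         + (a / b) * (math.exp(b * age_then) - math.exp(b * age_now)))
    return 1.0 - math.exp(-cumulative_hazard)

# An 18-year-old in 2010 with a median singularity date of 2050 (age 58):
print(round(p_death_between(18, 58), 3))  # ~0.105, same ballpark as the 13% figure
# Pulling the date in to, say, 2035 (age 43) shrinks it sharply:
print(round(p_death_between(18, 43), 3))  # ~0.039, close to the "my 5%" reply below
```

The exact output depends heavily on the chosen constants; the point is only that a low-double-digit percentage is the right order of magnitude, and that moving the date earlier shrinks it quickly.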
“A person might find that more good is done by donating money to organizations like SENS, FHI, or SIAI than by spending that money on pursuing a small chance of eternal life.”
If you already donate more than 5% of your income or time to one of these organizations, I’ll buy that. Otherwise (and this “otherwise” will apply to the vast majority of LW commenters), it’s invalid. You can’t say “alternative X would be better than Y, therefore we shouldn’t do Y” if you’re not actually doing X.
“Calling non-cryonauts irrational is not productive nor conducive to fostering a good epistemic atmosphere”
Why? Having a good epistemic atmosphere demands that there be some mechanism for letting people know if they are being irrational. You should be nice about it and not nasty, but if someone isn’t signing up for cryonics for a stupid reason, maintaining a high intellectual standard requires that someone or something identify the reason as stupid.
“People will not take a fringe subject more seriously simply because you call them irrational for not seeing it as obvious”
This is true, but maintaining a good epistemic atmosphere and getting people to take what they see as a “fringe subject” seriously are two entirely separate and to some extent mutually exclusive goals. Maintaining high epistemic standards internally requires that you call people on it if you think they are being stupid. Becoming friends with a person who sees you as a kook requires not telling them about every time they’re being stupid.
“Likewise, calling people irrational for having kids when they could not afford cryonics for them is extremely unlikely to do any good for anyone.”
If people are having kids they can’t afford (cryonics is extremely cheap; someone who can’t afford cryonics is unlikely to be able to afford even a moderately comfortable life), it probably is, in fact, a stupid decision. Whether we should tell them that it’s a stupid decision is a separate question, but it probably is.
“One easily falls to the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways.”
99% of the world’s population is disagreeing with us because they are irrational in simple, obviously flawed ways! This is certainly not always the case, but I can’t see a credible argument for why it wouldn’t be the case a large percentage of the time.
This argument basically reduces to, once you remove the aura of philosophical sophistication, “we don’t really know whether death is bad, so we should worry less about death”.
No. It more accurately reduces to “we don’t really know what the heck existence is, so we should worry even more about these fundamental questions and not presume their answers are inconsequential; taking precautions like signing up for cryonics may be a good idea, but we should not presume our philosophical conclusions will be correct upon reflection.”
Alright, but I would argue that a date of 2050 is pretty damn late. I’m very much in the ‘singularity is near’ crowd among SIAI folk, with 2050 as an upper bound. I suspect many others would also assign a date much sooner than 2050, but perhaps that is simply the typical mind fallacy on my part. At any rate, your 13% is my 5%, which is probably not the biggest consideration in the scheme of things; but your implicit point is correct that people much older than us should hesitate more before dismissing this very important conditional probability as irrelevant.
If you already donate more than 5% of your income or time to one of these organizations, I’ll buy that. Otherwise (and this “otherwise” will apply to the vast majority of LW commenters), it’s invalid. You can’t say “alternative X would be better than Y, therefore we shouldn’t do Y” if you’re not actually doing X.
Maybe, but a major point of this post is that it is bad epistemic hygiene to use generalizations like ‘the vast majority of LW commenters’ in a rhetorical argument. You and I both know many people who donate much more than 5% of their income to these kinds of organizations.
Having a good epistemic atmosphere demands that there be some mechanism for letting people know if they are being irrational. You should be nice about it and not nasty, but if someone isn’t signing up for cryonics for a stupid reason, maintaining a high intellectual standard requires that someone or something identify the reason as stupid.
But I’m talking specifically about assuming that any given argument against cryonics is stupid. Yes, correct people when they’re wrong about something, and do so emphatically if need be, but do not assume that, because weak arguments against your idea are more common, no strong arguments exist, or that your audience does not possess them.
This is true, but maintaining a good epistemic atmosphere and getting people to take what they see as a “fringe subject” seriously are two entirely separate and to some extent mutually exclusive goals.
If the atmosphere is primarily based on memetics and rhetoric, then yes; but if it is founded in rationality, then the two should go hand in hand. (At least, my intuitions say so, but I could just be plain idealistic about the power of group epistemic rationality here.)
If people are having kids they can’t afford (cryonics is extremely cheap; someone who can’t afford cryonics is unlikely to be able to afford even a moderately comfortable life), it probably is, in fact, a stupid decision. Whether we should tell them that it’s a stupid decision is a separate question, but it probably is.
It’s not a separate question, it’s the question I was addressing. You raised the separate question. :P
99% of the world’s population is disagreeing with us because they are irrational in simple, obviously flawed ways! This is certainly not always the case, but I can’t see a credible argument for why it wouldn’t be the case a large percentage of the time.
What about 99% of Less Wrong readers? 99% of the people you’re trying to reach with your rhetoric? What about the many people I know at SIAI who have perfectly reasonable arguments against signing up for cryonics and yet consistently contribute to or read Less Wrong? You’re not actually addressing the world’s population when you write a comment on Less Wrong; you’re addressing a group with a reasonably high standard of thinking ability and rationality. You should not assume their possible objections are stupid! It should be the duty of the author not to generalize when making in-group/out-group distinctions, not to paint things as black and white, and not to fall into (or let readers unnecessarily fall into) groupthink.
This argument basically reduces to, once you remove the aura of philosophical sophistication, “we don’t really know whether death is bad, so we should worry less about death”. This seems to me absurd. For more, read e.g. http://yudkowsky.net/other/yehuda.
Death is bad. The question is whether being revived is good. I’m not sure whether or not I particularly care about the guy who gets unfrozen. I’m not sure how much more he matters to me than anyone else. Does he count as “me”? Is that even a meaningful question?
I’m genuinely unsure about this. It’s not a decisive factor (it only adds uncertainty), but to me it is a meaningful one.