What do you have to protect?
Eliezer has stated that rationality should not be an end in itself, and that to get good at it, one should be motivated by something more important. For those of you who agree with Eliezer on this, I would like to know: What is your reason? What do you have to protect?
This is a rather personal question, I know, but I’m very curious. What problem are you trying to solve, or what goal are you trying to reach, that makes reading this blog and participating in its discourse worthwhile to you?
I’m not quite sure I can answer the question. I certainly have no major, world(view)-shaking Cause which is driving me to improve my strength.
For what it’s worth, I’ve had this general idea that being wrong is a bad idea for as long as I can remember. Suggestions like “you should hold these beliefs, they will make your life happier” always sounded just insane—as crazy as “you should drink this liquor, it will make your commute less boring”. From that standpoint, it feels like what I have to protect is just the things I care about in the world—my own life, the lives of the people around me, the lives of humans in general.
That’s it.
This is a pretty good summary of my standpoint. While I agree with the overarching view that rationality isn’t a value in its own right, it seems like a pretty good thing to practise for general use.
I’m trying to apply LW-style hyper-rationality to excelling in what I have left of grad school and to shepherding my business to success.
My mission (I have already chosen to accept it) is to make a pile of money and spend it fighting existential risk as effectively as possible. (I’m not yet certain whether SIAI is the best target.) The other great task I have is to persuade the people I care about to sign up for cryonics.
Strangely enough, the second task actually seems even less plausible to me than the first, and I have no idea how to even get started, since most of those people are theists.
Alcor addresses some of the ‘spiritual’ objections in their FAQ. (“Whenever the soul departs, it must be at a point beyond which resuscitation is impossible, either now or in the future. If resuscitation is still possible (even with technology not immediately available) then the correct theological status is coma, not death, and the soul remains.”) Some of that might be helpful.
However, that depends on your being comfortable persuading people to believe what are probably lies (which might happen to follow from other lies they already believe) in the service of leading them to a probably correct conclusion. That is something I would normally not endorse under any circumstances, but I would personally make an exception in the interest of saving a life, assuming they can’t be talked out of theism.
It also depends on their being willing to listen to any such reasoning if they know you’re not a theist. (In discussions with theists, I find they often refuse to acknowledge any reasoning on my part that demonstrates that their beliefs should compel them to accept certain conclusions, on the basis that if I do not hold those beliefs, I am not qualified to reason about them, even hypothetically. Not sure if others have had that experience.)
OB and then LW were the ‘step beyond’ to take after philosophy, not that I was seriously studying it.
To be honest, I don’t think there’s much going on these days new-topic-wise, so I’m here less often. But I do come back whenever I’m bored, so at first “pure desire to learn” and then “entertainment” would be my reasons.
Oh, and a major part of my goals in life is formed by religion, i.e. saving humanity from itself and whatever follows. This is more ideological than actual at this point in time, but anyway, that goal is furthered by learning more about AI/futurism. The rationality part less so, as I already had an intuitive grasp of it, you could say, and really all it takes is reading the Sequences, with their occasional flaws and too-strong assertions. The futurism part is more speculative (and interesting), so it’s my main focus, along with the moral questions it brings, though there is no dichotomy to speak of if you consider this a personal blog rather than a book or something similar.
Hope this helped :)
Yes, this is what I was curious about, thanks. I’ve seen others cite humanity’s existential risks as their motivations too (mostly uFAI, not as much nuclear war or super-flu or meteors). I’m like you in that for me it’s definitely a mix of learning and entertainment.