Hello. I’m a typical geeky 20-something white male who’s interested in science and technology. I have a bachelor’s degree in economics and business. I’m not a native English speaker.
From the time I was 12 I’ve spent most of my time surfing the internet, reading about interesting things, and generally wasting my time alone. A few years ago I was really depressed and had a plan for suicide. Once in a while I’ve done something actually useful. That’s my life in a nutshell.
I have always thought of myself as somewhat rational in the traditional sense when I’m not emotionally charged, but so do most people, I’d say. Who would be intentionally irrational?
When I first heard about LessWrong on 4chan’s /sci/ a few years ago, I heard only negative things about it. I got the impression that it was basically some kind of daydreaming cult for people interested in the singularity and transhumanism: people write about things that sound kinda important and deep in a pop-science manner, but don’t want to do anything more quantifiable or exact, or anything more difficult, like real science. I got the impression that it wasn’t something you were supposed to take very seriously.
Okay, a few years go by, and I start to get more interested in futurology and stuff. I stumble upon Luke Muehlhauser in his reddit AMA, and the things he talked about there sounded kinda cool, things I’d never really thought about before, so I read a few of his papers (“Intelligence Explosion and Machine Ethics”, “Intelligence Explosion: Evidence and Import”). After that I forget the whole thing again for a year, until I read his book “Facing the Intelligence Explosion”, in which he goes to some lengths to talk about LessWrong, so I decide to take a look.
So I read the sequence “How to Actually Change Your Mind”, and there were some useful things to consider if I want to be neutral in the face of evidence and change my mind about things. This Bayesian approach to rationality, or whatever it’s called, sounds pretty reasonable, and I think I want to learn more of it. In the meantime I read Eliezer Yudkowsky’s HPMOR and “Cognitive Biases Potentially Affecting Judgment of Global Risks” and a few random LessWrong articles here and there. Sometimes Eliezer Yudkowsky sounds so full of himself, like he knows everything about everything, that it’s pretty annoying. His narcissism and self-proclaimed genius remind me of Stephen Wolfram. But I like his optimism, he has really useful ideas to share about rationality, and he’s a good writer.
I also started to wonder: if these people are trying so hard to be rational, why do so many of them hold seemingly irrational beliefs about some things without much quantifiable evidence? I mean, I have a gut feeling that the singularity will probably happen at some point if there isn’t some societal collapse, but it’s far from certain and may not happen the way FAI advocates anticipate. The event is so far in the future, and there are so many factors involved, that I’m not sure how well you can predict how it happens or say meaningful things about it. Someone here made a good remark about it:
Furthermore, experts perform pretty badly when thinking about dynamic stimuli, thinking about behavior, and feedback and objective analysis are unavailable.
Predictions about existential risk reduction and the far future are firmly in the second category. So how can we trust our predictions about our impact on the far future?
I also agree with many of the points raised in this post. I think the work MIRI is doing might be useful, and I’m not against it, but I wouldn’t personally allocate my resources towards it at this point, at least not money. Karnofsky’s criticism was that MIRI doesn’t take into account many variables he has considered, and on top of that there must be even MORE variables that MIRI hasn’t taken into account.
There are many beliefs here that seem to be based on non-quantifiable hypotheses. You would think that if you took a bunch of rationalists who applied the methods of rationality correctly and were willing to change their minds about their beliefs, the likelihood that they all held the same fringe beliefs based on non-quantifiable evidence would be pretty small. Note: I don’t know everything about the community here; this is just from the little time I’ve spent here.
I hope MIRI, transhumanism, cryonics, polyamory, etc. are not inherently connected to LessWrong and its approach to rationality?
I still have a cautiously positive view of this community. Even though I dislike some of these fringe opinions, I’m still interested in decision theory and in this kind of approach to rationality, which I don’t think is fringe at all, and I’m willing to learn more about it. I’m a kinda slow thinker, and sometimes when I’m around people it feels like I’m less intelligent than others and it takes me longer to process things. By making good decisions in advance I could minimize the impact of situations where my well-being depends wholly on quick thinking.
But I don’t expect very much practical success, and most of all, I think of this as a form of entertainment (“epiphany porn”, as you like to call it), and when I have more important things to do, I will probably set this aside.