Hi, I’m Lincoln. I am 25; I live and work in Cambridge, MA. I currently build video games, but I’m going to start a Ph.D. program in Computer Science at the local university in the fall.
I have identified rationality as a thing to be achieved ever since I knew there was a term for it. One of the minor goals I’ve had since I was about 15 was devising a system of morality which fit my own intuitions but was consistent under reflection (though I wouldn’t have put it in those words at the time). The two thought experiments I focused on were abortion and voting. I didn’t come up with an answer, but I knew that such a morality was a thing I wanted; consistency was important to me.
I ran across Eliezer’s work 907 days ago, via a Hacker News post about the AI-box experiment and various other Overcoming Bias posts that had been submitted over the years. I didn’t immediately follow through on any of it.
But I became aware of SIAI about 10 months ago, when rms on Hacker News linked an interesting post about its Visiting Fellows program.
I think I had a “click” moment: I immediately saw that AI was both an existential risk and a major opportunity, and I wanted to work on these things to save the world. I followed links and ended up at LW; I didn’t immediately understand the connection between AI and rationality, but they both looked interesting and useful, so I bookmarked LW.
I immediately sent in an application to the Visiting Fellows program, thinking “hey, I should figure out how to do this.” I think it was Jasen who responded and asked me by email to summarize the purpose of SIAI and how I thought I could contribute. I wrote the purpose summary, but got stuck on how to contribute: I had barely read any of the Sequences at that time and had no idea how I could be useful. For those reasons (as well as a healthy dose of akrasia), I gave up on the application.
Somewhere in there I found HP:MoR (perhaps via TVTropes?), saw the author was “Less Wrong” and made the connection.
Since then, I have been inhaling the Sequences; in the last month I’ve been checking the front page almost daily. I applied to the Rationality Boot Camp.
I’m very far from being a rationalist; I can see that my rationality skills are quite poor, but I at least identify as a student of rationality.
Hey, I’m in a similar situation to yours. I’ve worked on making games (as a programmer) for several years, and I’m currently working on a game of my own that incorporates certain ideas from LessWrong.
I’ve been wondering lately whether I could contribute more by doing FAI-related research. What convinced you to switch to it? How much do you think you’ll contribute? How talented are you, and how much of a deciding factor was that?
That’s me, welcome to Less Wrong! Glad to form some part of your personal causal history.
Update: I got into Rationality Boot Camp, which starts tomorrow. Thanks for posting that on HN! I probably wouldn’t be here otherwise.