Hello! I’m not really sure which facts about me are useful in this introduction, but I’ll give it a go: I am a Software QA Specialist / SDET, I used to write songs as a hobby, and my partner thinks I look good in cyan.
I have found myself drawn to LessWrong for at least three reasons:
1. I am very concerned about existential and extinction risk from advanced AI.
2. I enjoy reading about interesting topics and broadening and filling out my world model.
3. I would very much like to be a more rational person.
Lots of words about thing 1: In the past few months, I have deliberately changed how I spend my productive free time, which I now mostly occupy by trying to understand AI x-risk, communicate about it, and help with related projects. I have only a rudimentary, layman’s understanding of machine learning, and I have failed pretty decisively in past attempts at mathematical research, so I don’t see myself ever holding an alignment research role. Instead, I’m focused on contributing in small ways: outreach, building out part of the alignment ecosystem, and directing a percentage of my income to related causes.

(If I start writing music again, it will probably be either because I think alignment succeeded or because I think we are already doomed. Either way, I hope I make time for dancing. …Yeah. There should be more dancing.)
Some words about thing 2: I am just so glad to have found a space on the internet that holds its users to a high standard of discourse. Reading LessWrong posts and comments tends to feel like being served a wholesome meal prepared by a professional chef. It’s a welcome break from the home cooking of my friends, my family, and myself, and especially from the fast food (or miscellaneous hard drugs) of many other platforms.
Frankly just a whole sack of words about thing 3: For my whole life until a few short years ago, I was a conservative evangelical Christian, a creationist, a wholesale climate science denier, and generally a moderately conspiratorial thinker. I was sincere in my beliefs and held truth as the highest virtue. I really wanted to get everything right (including understanding and leaving space for the fact that I couldn’t get everything right). I really thought that I was a rational person and that I was generally correct about the nature of reality.

Some of my beliefs were updated in college, but my religious convictions didn’t begin to unravel until a couple of years after I graduated. It wasn’t pretty. The gradual process of discovering how wrong I was about an increasingly long list of things that mattered to me was roughly as pleasant as I imagine a slow death to be. Eventually coming out to my friends and family as an atheist wasn’t a good time, either. (In any case, here I still am, now a strangely fortunate person, all things considered.)

The point is, I have often caught myself applying the same old irrational thought patterns to other things, so I have been working to reduce the frequency of those mistakes. Even if AI risk didn’t loom large in my mind, I would still greatly appreciate this site and its contributors for the service they are doing for my reasoning. I’m undoubtedly still wrong about many important things, and I’m hoping that over time and with effort, I can manage to become slightly less wrong. (*roll credits)