I’m Nate. I’m 23. My road here was a winding one.
I grew up as one of those “mathematically gifted” kids in a tiny rural town. I turned away from mathematics towards computer science (which I loved) and economics (which I decided I needed to understand if I wanted to save the world). I went on to become a software engineer at Google.
At the intersection of computer science and economics I fueled a strong belief that the world is broken and that we could do far better if we redesigned social structure from scratch, now that we have so much more knowledge and technology than we did when we created these antiquated governments. I despaired that most people think progress entails playing political tug-of-war instead of building a better system. I spent a long time refining my ideas.
In the interim I missed a number of opportunities to discover this site. In 2008 I stumbled across the Quantum Physics sequence on Overcoming Bias. I read it up to where it was still being written, then moved on. In 2010, I found HPMoR. I read it, noticed the links to this site, and poked around a little. Nothing came of it. I caught up to where HPMoR was being written, then put it out of my mind. I had more important things to do. I had big ideas to express, and I started writing them down.
At some point along the way I realized I needed more math. To my horror, I found that the math I had been so good at as a kid was largely memorized, not deeply understood. I knew how to manipulate symbols like nobody’s business, but I wouldn’t have been able to re-invent the things I “knew” if you erased them from my mind. (In LW terms, I had memorized many passwords.) I started going back through what I thought I knew and grokking it.
During my journey, sometime early in 2012, I stumbled across the Quantum Physics sequence on LessWrong. From the summaries, it seemed like a good way to quickly evaluate how much of my QM knowledge was cached passwords and how much I had really learned. I started reading it and experienced a strong sense of deja vu. I figured out that LW was seeded by Overcoming Bias, experienced some nostalgia, put the feeling to rest, and moved on.
Relearning math and learning to write morphed into a more general quest to promote clear thinking and better methods of deduction with a long-term goal of bridging my pet inferential gap. As I researched and wrote, this one site kept popping up in my search results—LessWrong.
Around the same time (late 2012) I heard about updates to HPMoR. I hadn’t been following it for years, but I was suddenly reminded why the site felt so familiar. I’m not exactly sure how everything fell into place, but some combination of LessWrong showing up in my research, a recollection that HPMoR was associated, and the remembered nostalgia from the Quantum Physics sequence all came together. I finally decided to see what this site was all about.
The rest is history. I tore through the sequences. Much of it was extremely validating: Mysterious Answers and Politics is the Mind-Killer expressed much of what I had set out to say. I’ve always planned to cheat death. I attempted a similar dissolution of “free will” a few years back. The rest of it was largely epiphany porn.
The strongest epiphany came when I was introduced to the idea of UFAI. From my vantage point between economics and computer science, everything clicked. Hard.
I’d taken AI courses, but AI was a “centuries in the future” sort of vagary. My primary concern was with finding a way to “refactor” governments (and create meta-governments, as I do not claim to know the best way to run a society). To me, that was The Way To Save The World™ -- until I actually thought about UFAI.
I didn’t need any convincing. I simply… hadn’t considered it before. Upon first reflection, the scope of the problem became clear. I experienced panic, and not because UFAI is scary: overnight, my Way To Save The World was eclipsed by a threat that darkens the entire future.
It’s hard to overstate how much my ideals motivate me. The AI problem shook me to my core. I’d ostensibly been trying to save the world; how could I have missed something as obvious as UFAI? How could I take my ideals seriously if I’d misunderstood the problem so badly that I hadn’t considered existential threats? In light of this new information, what should I really be doing to ensure a bright future?
I went into philosophical-panic reevaluate-everything mode. That was a few months ago. I’ve done a lot of reflection. I’m still a bit shaken. I have grand ideas about how we can get to a better social structure from here and a lot of inertial passion along those lines. I don’t know nearly enough math. I feel like I’m late to the party, passionate but impotent. I’m trying to find a way to help beyond donating to MIRI. I feel outclassed here, which is probably a good thing. I’m working on getting stronger. I have a lot to do.
Hello!
...
We need to talk more.
Let’s. I’m on the east coast until Aug 11. Perhaps we can meet up after work on the week of the 12th.
(Context for others: The two of us met briefly at a meetup in June and exchanged usernames, but haven’t spoken much.)
Do you have a recommendation for how to pronounce ‘So8res’?
There’s no canonical pronunciation; I enjoy the ambiguity. My surname (Soares) is pronounced “SOAR-ees” by my family, if that helps any.
So like how a Canadian would pronounce multiple apologies? I like it.