Finally Entering Alignment

Background

Reading Death With Dignity, combined with recent advancements shortening my timeline, has finally made me understand on a gut level that nature is allowed to kill me. Because of this, alignment has gone from “an interesting career path I might pursue after I finish studying” to “if I don’t do something, I’ll die while there was something else I could have done.”

A big part of writing this is to get tailored advice, so here’s a bit about my current skills that could be useful to the cause:

  • Intermediate programmer. I haven’t done much in AI, but I’m confident I can learn quickly. I’m good with Linux and can do sysadmin-y things.

  • I know a good amount of math; I’ve read textbooks on real analysis and linear algebra. With some effort, I think I can understand technical alignment papers, though I’m far from writing them.

  • I’m 17 and unschooled, meaning I have nearly infinite free time and no financial obligations.

Babble

Inspired by Entering at the 11th Hour, here’s a list of things I could do that might contribute (not quite “babble,” but close enough):

  1. Learn ML engineering and apply to AI safety orgs. I believe I can get to the level described in AI Safety Needs Great Engineers with a few months of focused practice.

  2. Pay down alignment research debt by summarizing results, writing expository material, making quiz games, etc.

  3. Help engineers learn math: hold study groups, tutor, etc.

  4. Help researchers learn engineering. (I’m not an amazing programmer, but a lot of research code is questionable, to say the least.)

  5. Donate money to safety orgs. (I’ll wait until I’m familiar with all the options, so I can weight donations by likelihood of success.)

  6. Host an AI/​EA/​Rationality meetup. I live in the middle of nowhere, so this would be the only one in the area.

  7. Try to convince some young math prodigies that alignment is important. (I’ve run into a few in math groups before.)

  8. Make a website/​YT channel/​podcast debating AGI with people, in order to persuade them and raise awareness. (Changing even one person’s career path is worth a lot.)

  9. Lobby local politicians; see if anyone I know has connections and can put in a word.

  10. Become active on LessWrong and EleutherAI in order to find friends who’ll help along the way. This is hard for me because of impostor syndrome right now (you don’t want to know how long writing this post took).

Reflection

I most like (1), (2)–(4), and (6). (7) is something I’ll do the next time I have the chance.

I’m going to spend my working time studying the engineering needed to get hired at a safety org. If anyone here is good at programming and bad at math (or the converse), please contact me; I’d love to help (teaching helps me learn a subject too, so don’t be shy).

Updates

I applied to Atlas after some prompting in the comments and was accepted, which led to me becoming more and more involved in the Berkeley rationalist scene (e.g., I stayed in an experimental group house for a month in October–November 2022). I’m now doing SERI MATS under Alex Turner until March 2023.

I still have a long way to go before aligning the AI, but I’m making progress :)