Fantastic post, particularly interesting to me because last December I decided to pivot and work on AI safety (it seems like the most useful thing to do). In the spirit of sharing different perspectives, I will also share what I tried and failed at since then, so someone else might avoid my failures and recognize more easily whether your suggested plan (hobby work while in a stable job) might not work for them (as it didn't for me).
The TL;DR: My job drains too much energy for me to be productive on AI safety in my spare time (I tried and failed), and I can't easily get another job with both good pay and work-life balance. So I prefer to quit my job and do AI safety full time with no funding, until either I can get paid to do that, or I have to return to a job (but at least I will have produced something instead of nothing).
More context: I'm a software engineer doing backend development (code interacting with databases) at a startup. My strongest passions are game theory, psychology, education, and ethics. If AGI weren't urgent as an x-risk, I would work on improving education and ethics (the same vibe as the Sequences wanting to train rationalists). I recently graduated from engineering school and have been working for close to 2 years at a decent salary. I discovered late in my studies that I don't like working as a programmer (because https://www.lesswrong.com/posts/x6Kv7nxKHfLGtPJej/the-topic-is-not-the-content). I also seem to only be able to do good work on things that interest me. This is normal to an extent for everyone; I'm pointing out that it's stronger than normal for me, to the point that I often cannot do mental work on something that doesn't interest me (even though I can force myself to do physical work).
So my diploma and experience are for a job I don't enjoy and which drains me. The usual constraints of working for a company (working within fixed time slots, optimizing your work for company profit over utility) are also a burden. It was in this context that I decided last December to try pivoting into AI alignment research. I was also fortunate to have regular discussions with an AI alignment researcher about my topic and what I wanted to learn and write (he deserves credit for that effort; I'm only not naming him because the failure is my own). I found a couple of papers I enjoyed, chose one to analyze in detail, and set myself the goal of publishing a short LW article doing some further work on that paper. I totally failed at that. The full list of reasons is of course diverse and complicated, but the main one is that I put little time into it, and the time I had was low-energy. At that rate, I could have continued for a year and produced almost no value. Another reason for the failure: I mistakenly chose a paper that wasn't centrally about AI safety, so the work I produced was pretty useless for solving the actual problems of AI safety.
So I abandoned that plan. Instead, I negotiated a salary raise, kept my living expenses low, and saved money. I'm now at the point where I can quit my job (and am planning to do so soon) and actually try AI safety research under suitable conditions, with the only caveat being limited time. Doing this, I hope to learn much more about how good a personal fit I have for alignment research and for other related work (governance, pedagogy), and to choose the rest of my path accordingly.
In general, I would advise trying the hobby path first, while staying aware that it won't work for everyone (failing to be productive at something as a hobby doesn't mean you won't be good at it as a job). If it doesn't work, you have several other options as presented in the post, and I'll contribute the one I am currently taking: save enough money to buy some real free time (probably at least 3 months; 6 should be enough; there's probably no point going beyond 12) and try doing that thing you wanted to do.
Formulating actionable research projects with predictably legible results takes some fluency in the topic. That means a mentor who will do this for you (adjusting as you progress), unusual luck, unusual talent, or studying for at least a few years to figure out what's going on. In hindsight, both mentor-guided research projects and self-directed study projects can turn out to be appallingly inefficient uses of time, so some care in picking them can go a long way.