I am an experienced data scientist and machine learning engineer with a background in neuroscience. In my previous job I held a senior position and led a team, spending ~$10k weekly on hundreds of model training runs on AWS and GCP using pipelines I wrote from scratch, which profitably guided the expenditure of hundreds of thousands of dollars daily in rapid real-time bidding. I’ve spent many years reading LessWrong and thinking about the alignment problem. I’ve been working full-time on independent AI alignment research for about a year and a half now. I got a small grant from the Long-Term Future Fund to help support my transition to working on AI alignment. I took the AI Safety Fundamentals course and found it enjoyable and helpful, even though it turned out I had already read all the assigned readings. I read a lot of research papers: in the alignment field specifically, and in ML and neuroscience generally. I’m friends with, and talk regularly with, employed AI alignment researchers. I’ve gone to three EAGs and had many interesting one-on-ones with people to discuss AI alignment, safety evals, governance, etc.
Over the past year and a half I’ve applied to, and been rejected from, many different alignment orgs and safety teams within capabilities orgs. I’m sick of trying to work entirely on my own with no colleagues; I work much better as part of a team. I’ve applied for more grant funding for independent research, but I’m not happy about it. I’m considering trying to find a part-time mainstream ML job just so that I can have a team and work on building things productively again.
I’d love to start an org to pursue my alignment agenda. I feel I have plenty of ideas to keep a handful of employees busy, and sufficient leadership experience to manage a team.
Here’s a recording of a recent talk I gave at the Virtual AI Safety Unconference, in which I discuss some of my research ideas. This is only a small fraction of what I’ve been up to over the past couple of years: https://youtu.be/UYqeI-UWKng
If you find my research ideas intriguing and might be interested in forming an org with me, or in interviewing me as a possible fit for your existing org, please reach out. You can message me here on LessWrong and I’ll share my resume and email.
(Edit: this post got me connected with a good position!)
I think your ideas are some of the most promising I’ve seen; I’d love to see them pursued further, though I’m concerned about the air-gapping