Hey Ben and Jesse!
This comment is more of a PSA:
I'm building a startup focused on making this kind of thing exceptionally easy for AI safety researchers. I've worked as an AI safety researcher for a few years, and I have an initial prototype that I'm now integrating into AI research workflows. In other words, with respect to this post, I've been actively working towards a prototype for the "AI research fleets".
I'm actively looking for a CTO I can build with to 10x alignment research in the next 2 years. I'm looking for someone absolutely cracked, and it's fine if they already have a job (I'll give my pitch and let them decide).
If that’s you or you know anyone who could fill that role (or who I could talk to that might know), then please let me know!
For alignment researchers or people in AI safety research orgs: hit me up if you want to be pinged for beta testing when things are ready.
For orgs, I'd be happy to work with you to set up automations, give a masterclass on the latest AI tools and automation workflows, and possibly provide a custom monthly report (with a video overview) so that you can focus on research rather than trying new tools that might not be relevant to your org.
Additional context:
“When we say “automating alignment research,” we mean a mix of Sakana AI’s AI scientist (specialized for alignment), Transluce’s work on using AI agents for alignment research, test-time compute scaling, and research into using LLMs for coming up with novel AI safety ideas. This kind of work includes empirical alignment (interpretability, unlearning, evals) and conceptual alignment research (agent foundations).
We believe that it is now the right time to take on this project and build this startup because we are nearing the point where AIs could automate parts of research and may be able to do so sooner with the right infrastructure, data, etc.
We intend to study how our organization’s work can integrate with the Safeguarded AI thesis by Davidad.”
I’m currently in London for the month as part of the Catalyze Impact programme.
If interested, send me a message on LessWrong or X, or email me (thibo.jacques @ gmail dot com).