Announcing: OpenAI’s Alignment Research Blog
The OpenAI Alignment Research Blog launched today at 11 am PT, with one introductory post and two technical posts.
Blog: https://alignment.openai.com/
Thread on X: https://x.com/j_asminewang/status/1995569301714325935
Speaking purely personally: when I joined the Alignment team at OpenAI in January, I saw there was more safety research than I'd expected, not to mention interesting thinking on the future of alignment. But that research and thinking didn't really have a place to go: it's often too short or informal for the main OpenAI blog, and most OpenAI researchers aren't on LessWrong. I'm hoping this blog is a more informal, lower-friction home than the main one, and that this new avenue of publishing encourages sharing and transparency.