We want to find people with diverse skills and backgrounds to work in or with the Taskforce, to catalytically advance AI safety this year with global impact. We’re particularly interested in building out “safety infrastructure” and developing risk assessments that can inform policymakers and spur global coordination on AI safety. Relevant experience includes, for example, running evals for LLMs; model pretraining, finetuning, or RL; and technical research on the societal impacts of models. But we’re open to hearing what should be done beyond this as well.
From Zvi’s linked Google form in the post: