Co-Executive Director, ML Alignment & Theory Scholars (MATS) Research (2022-present)
Co-Founder & Board Member, London Initiative for Safe AI (2023-present)
Board Member, Catalyze Impact (2026-present) | ToC here
Manifund Regrantor (2023-present) | RFPs here
Advisor, AI Safety ANZ (2024-present)
Advisor, Pivotal Research (2024-present)
Advisor, Halcyon Futures (2025-present)
Advisor, Black in AI Safety and Ethics (2025-present)
Advisor, Alignment Foundation (2026-present)
Ph.D. in Physics at the University of Queensland (2017-2023)
Group organizer at Effective Altruism UQ (2018-2021)
Personal website: ryankidd.ai
Give me feedback! :)
Thanks for replying! I listed these organizations because they all maintain up-to-date repositories of the papers they contributed to. If you added all the papers linked there (and I think you should, as they are all AI safety papers), I suspect you would end up with ~400-500 papers, many of which are not in your initial 200!
If I were running this project, I would additionally scrape papers from the websites of all the orgs listed on the AI safety map (they maintain a spreadsheet of orgs) and the 80,000 Hours org list; a rough sketch of that scraping step is below.
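To make that concrete, here is a minimal sketch of the scraping step. It assumes each org exposes a publications page that links directly to arXiv; the URLs in `ORG_PUBLICATION_PAGES` are placeholders, not real org pages, and in practice they would be pulled from the AI safety map spreadsheet and the 80,000 Hours org list.

```python
# Minimal sketch: collect arXiv links from org publications pages.
# Placeholder URLs; assumes pages link papers directly via arXiv.
import re

import requests
from bs4 import BeautifulSoup

# Hypothetical publications pages, standing in for the org lists above.
ORG_PUBLICATION_PAGES = [
    "https://example-safety-org.org/publications",
    "https://another-org.ai/research",
]

ARXIV_LINK = re.compile(r"https?://arxiv\.org/(abs|pdf)/\d{4}\.\d{4,5}")


def scrape_paper_links(url: str) -> set[str]:
    """Return the set of arXiv links found on one publications page."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    links = set()
    for anchor in soup.find_all("a", href=True):
        match = ARXIV_LINK.match(anchor["href"])
        if match:
            # Normalize /pdf/ links to /abs/ so duplicates collapse.
            links.add(match.group(0).replace("/pdf/", "/abs/"))
    return links


if __name__ == "__main__":
    all_papers = set()
    for page in ORG_PUBLICATION_PAGES:
        try:
            all_papers |= scrape_paper_links(page)
        except requests.RequestException as err:
            print(f"Skipping {page}: {err}")
    print(f"Found {len(all_papers)} unique papers.")
```

Deduplicating on normalized arXiv IDs matters here, since many orgs co-author papers and the same link would otherwise be counted several times across sites.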