Co-Executive Director at ML Alignment & Theory Scholars Program (2022-present)
Co-Founder & Board Member at London Initiative for Safe AI (2023-present)
Manifund Regrantor (2023-present) | RFPs here
Advisor, Catalyze Impact (2023-present) | ToC here
Advisor, AI Safety ANZ (2024-present)
Ph.D. in Physics at the University of Queensland (2017-2023)
Group organizer at Effective Altruism UQ (2018-2021)
Give me feedback! :)
Gemini 3 estimates that there are 15-20k core ML academics and 100-150k supporting PhD students and postdocs worldwide. If the TMLR sample is representative, this indicates that there are (see the arithmetic sketch after this list):
~20k academics interested in any of the above research areas.
~15k academics interested in the non-robustness research areas.
~5k academics interested in AI safety or alignment (note that this might include RLHF).
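To make the extrapolation explicit, here is a minimal back-of-the-envelope sketch in Python. The population range comes from the Gemini 3 estimate above; the sample fractions are illustrative placeholders (not the actual TMLR sample proportions), chosen only to show how point estimates of this kind would be computed.

```python
# Back-of-the-envelope extrapolation from a paper sample to the worldwide
# ML research population. Population ranges follow the Gemini 3 estimate
# quoted above; the sample fractions are illustrative placeholders, NOT
# the actual TMLR sample proportions.

CORE_ACADEMICS = (15_000, 20_000)   # core ML academics (low, high)
SUPPORTING = (100_000, 150_000)     # supporting PhD students and postdocs (low, high)

# Hypothetical fractions of the sample interested in each category.
sample_fractions = {
    "any of the research areas": 0.15,
    "non-robustness research areas": 0.11,
    "AI safety or alignment (incl. RLHF)": 0.04,
}

def extrapolate(fraction: float) -> tuple[float, float]:
    """Scale a sample fraction to the (low, high) worldwide population."""
    low = fraction * (CORE_ACADEMICS[0] + SUPPORTING[0])
    high = fraction * (CORE_ACADEMICS[1] + SUPPORTING[1])
    return low, high

for label, frac in sample_fractions.items():
    low, high = extrapolate(frac)
    print(f"{label}: ~{low / 1000:.0f}k-{high / 1000:.0f}k researchers")
```

With these placeholder fractions the sketch reproduces figures of roughly the same magnitude as the bullets above (~17-26k, ~13-19k, and ~5-7k respectively); the real estimates depend on the actual TMLR sample proportions.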