Give me feedback! :)
Current
Co-Director at ML Alignment & Theory Scholars Program (2022-current)
Co-Founder & Board Member at London Initiative for Safe AI (2023-current)
Manifund Regrantor (2023-current)
Past
Ph.D. in Physics from the University of Queensland (2017-2022)
Group organizer at Effective Altruism UQ (2018-2021)
Thank you so much for conducting this survey! I want to share some information on behalf of MATS:
In comparison to the AIS survey's gender ratio of 9 M:F, MATS Winter 2023-24 scholars and mentors were 4 M:F and 12 M:F, respectively. Our Winter 2023-24 applicants were 4.6 M:F, whereas our Summer 2024 applicants were 2.6 M:F, closer to the EA survey's ratio of 2 M:F. These data seem to indicate a large recent shift in the gender ratio of people entering the AIS field. Did you find that your AIS survey respondents with more AIS experience were significantly more male than newer entrants to the field?
MATS Summer 2024 applicants and interested mentors similarly prioritized research to “understand existing models” (e.g., interpretability and evaluations) over research to “control the AI” or “make the AI solve it” (e.g., scalable oversight and control/red-teaming), which they in turn prioritized over “theory work” (e.g., agent foundations and cooperative AI; note that some cooperative AI work is primarily empirical).
The forthcoming summary of our “AI safety talent needs” interview series broadly agrees with this survey's findings regarding the importance of “soft skills” and “work ethic” for impactful new AIS contributors. Watch this space!
In addition to supporting core established AIS research paradigms, MATS would like to encourage the development of new paradigms. For better or worse, the current AIS funding landscape seems to set a high bar for speculative research into new paradigms. Has AE Studio considered sponsoring significant bounties or impact markets for scoping promising new AIS research directions?
Did survey respondents mention how they proposed making AIS more multidisciplinary? Which established research fields are most needed in the AIS community?
Did EAs consider AIS exclusively a longtermist cause area, or did they anticipate near-term catastrophic risk from AGI?
Thank you for the kind donation to MATS resulting from this survey!