Give me feedback! :)
Current
- Co-Director at ML Alignment & Theory Scholars Program (2022-current)
- Co-Founder & Board Member at London Initiative for Safe AI (2023-current)
- Manifund Regrantor (2023-current)

Past
- Ph.D. in Physics from the University of Queensland (2017-2022)
- Group organizer at Effective Altruism UQ (2018-2021)
I interpret your comment as assuming that new researchers with good ideas produce more impact on their own than in teams working towards a shared goal; this seems false to me. I think that independent research is usually a bad bet in general and that most new AI safety researchers should be working on relatively few impactful research directions, most of which are best pursued within a team due to the nature of the research (though some investment in other directions seems good for the portfolio).
I’ve addressed this a bit elsewhere in the thread, but here are some more thoughts:
- New AI safety researchers seem to face mundane barriers to reducing AI catastrophic risk, including funding, infrastructure, and general executive function.
- MATS alumni are generally doing great work (~46% currently work in AI safety/control, ~1.4% work on AI capabilities), but we can do even better.
- Like any other nascent scientific/engineering discipline, AI safety will produce more impactful research with scale, albeit with some diminishing returns on impact eventually (though I think we are far from that inflection point).
- MATS alumni, who are in my (possibly biased) opinion a large swathe of the most talented new AI safety researchers, should ideally not face mundane barriers to reducing AI catastrophic risk.
- Independent research seems worse than team-based research for most research that aims to reduce AI catastrophic risk:
  - “Pair programming,” builder-breaker exercises, rubber-ducking, etc. are valuable parts of the research process and benefit from working in a team.
  - Funding insecurity and grant-writing responsibilities are greater for independent researchers and obstruct research.
  - Orgs with larger teams and discretionary funding can take on interns to help scale projects and provide mentorship.
  - Good prosaic AI safety research largely looks more like large teams doing engineering and less like lone geniuses doing maths. Obviously, some lone-genius researchers (especially on mathsy, non-prosaic agendas) seem great for the portfolio too, but these people seem hard to find or train anyway, so there is often more alpha in team-based engineering, by my lights. I also doubt that the optimal mechanism to incentivize “lone genius research” is small independent grants rather than large bounties and academic nerd-sniping.
Therefore, more infrastructure and funding for MATS alumni, who are generally value-aligned and competent, seems good for reducing AI catastrophic risk in expectation.