What should be done? I think:
Office hubs: Expand SASS, which is close to the new OpenAI and Anthropic offices in Sydney, and start an AI safety hub in Canberra to support the new AISI. Successful AI safety hubs have benefited from prominent founding member organizations, such as ARC, CG, Redwood, and MIRI (for Constellation) and Apollo, BlueDot, and MATS (for LISA). Similarly, SASS should bring together orgs like the Gradient Institute and Harmony Intelligence in a shared space, and the new Canberra hub should be built around Good Ancestors. Office hubs benefit member orgs through economies of scale (e.g., shared operations and amenities), shared networking events, easier collaboration, and a pipeline of strong new recruits in the form of office guests and members.
Training programs: Expand TARA and the Sydney AI Safety Fellowship, focusing on accelerating top talent and building local mentorship capacity for future programs. Don't focus on maximizing average impact per participant; this matters less than reducing the mentorship bottleneck, which is best addressed by boosting the most advanced participants.
Academic labs: Build relationships with AI/CS academics at UniMelb, Monash, USyd, ANU, UTS, UNSW, UA, UQ, etc. Help launch AI safety courses, as Roy Rinberg and Boaz Barak did at Harvard; the Stanford and CAIS courses offer further inspiration. Start AI safety academic labs modeled on UC Berkeley CHAI, MIT AAG, NYU ARG, the Bau Lab, Stanford HAI, CMU FOCAL, etc.
Conferences: Run an annual AI safety conference like the Australian AI Safety Forum 2024, bringing together academia, industry, government, and nonprofit field-builders. EAGx is probably not enough, as many people from academia, industry, and government likely won’t attend.
Could you clarify? Do you mean that, given the choice between supporting someone new, who would gain a lot since they haven't participated in many AI safety programs, and supporting someone more advanced, you'd suggest picking the latter? The reasoning being that the former might look like the better bet because there is more room to make a difference, but boosting the latter increases the supply of mentors and therefore ends up benefiting beginners at least as much.
Yes, I would generally support picking the latter, as they have a faster path to mentorship, research leadership, and impact, and the field currently seems bottlenecked on mentorship and research leads, not marginal engineers (though individual research leads might feel bottlenecked on marginal engineers).
We should prioritize people who already have research or engineering experience, or a very high iteration speed, as we are operating under time constraints: AGI is coming soon. Additionally, I think "research taste" will matter more than engineering ability given AI automation, and taste takes a long time to build; it is better to select people with existing research experience from another field that they can adapt, which also promotes interdisciplinary knowledge transfer.
I talk more about it here.