Cheers, Akash! Yep, our confirmed mentor list was updated in the days after this retrospective was published. Our website remains the best up-to-date source for our Summer/Winter plans.
Do you think this is the best thing for MATS to be focusing on, relative to governance/policy?
MATS is not currently bottlenecked on funding for our current Summer plans and hopefully won’t be for Winter either. If more interested high-impact AI gov mentors appear in the next month or two (and some already seem to be appearing), we will boost this component of our Winter research portfolio. If ERA disappeared tomorrow, we would do our best to support many of their AI gov mentors. In my opinion, MATS is not currently sacrificing opportunities to significantly benefit AI governance and policy; rather, we are rate-limited by factors outside of our control and are taking substantial steps to circumvent them, including:
Substantial outreach to potential AI gov mentors;
Pursuing institutional partnerships with key AI gov/policy orgs;
Offering institutional support and advice to other training programs;
Considering alternative program forms less associated with rationality/longtermism;
Connecting scholars and alumni with recommended opportunities in AI gov/policy;
Regularly recommending scholars and alumni to AI gov/policy org hiring managers.
We appreciate further advice to this end!
Do you think there are some cultural things that ought to be examined to figure out why scaling labs are so much more attractive than options that at-least-to-me seem more impactful in expectation?
I think this is a good question, but it might be misleading in isolation. I would additionally ask:
“How many people are the AISIs, METR, and Apollo currently hiring and are they mainly for technical or policy roles? Do we expect this to change?”
“Are the available job opportunities for AI gov researchers and junior policy staffers sufficient to justify pursuing this as a primary career pathway if one is already experienced in ML and particularly well-suited (e.g., dispositionally) for empirical research?”
“Is there a large demand for AI gov researchers with technical experience in AI safety and familiarity with AI threat models, or will most roles go to experienced policy researchers, including those transitioning from other fields? If the former, where should researchers gain technical experience? If the latter, should we be pushing junior AI gov training programs or retraining bootcamps/workshops for experienced professionals?”
“Are existing talent pipelines into AI gov/policy meeting the needs of established research organizations and think tanks (e.g., RAND, GovAI, TFS, IAPS, IFP, etc.)? If not, where can programs like MATS/ERA/etc. best add value?”
“Is there a demand for more organizations like CAIP? If so, what experience do the founders require?”