Thanks for these explanations – I think they're reasonable & insightful. A few thoughts:
Most of the scholars in this cohort were working on research agendas for which there are world-leading teams based at scaling labs
I suspect there's some bidirectional causality here: people want to work at scaling labs because they're interested in the research that scaling labs are doing, and people want to focus on the research that scaling labs are doing because they want to work at scaling labs.
There seems to be an increasing trend in the AI safety community towards the belief that most useful alignment research will occur at scaling labs
I think this is true of a subset of the AI safety community, but I don't think it characterizes the community as a whole. For example, another (even stronger, IMO) trend in the AI safety community has been towards the belief that policy work & technical governance work are more important than many folks previously expected (see, e.g., Paul joining USAISI, MIRI shifting to technical governance, UKAISI being established, not to mention the general surge in interest among policymakers).
One perspective on this could be "well, MATS is a technical research program, and we're adding some governance mentors, so shrug." Another could be "well, it seems like MATS is shifting more slowly than one might've imagined, resulting in a culture/ecosystem/mentor cohort/selection process/fellow cohort that disproportionately wants to join scaling labs."
RE shifting more slowly or having a disproportionate focus: note that the ERA fellowship has shifted its priorities toward governance and technical governance – 2/3 of their fellows will be focused on governance + technical governance projects. I'm not necessarily saying this is what would be best for MATS, but it at least suggests that MATS' focus on incubating "technical researchers who want to work at scaling labs" should be seen as part of its design, not an inevitability.
I might be a bit "biased" in that I work in AI policy, and my worldview generally suggests that AI policy (as well as technical governance) is extremely neglected. I personally think it's harder to make the case that giving scaling labs better alignment talent is comparably neglected – it's still quite important, but scaling labs are extremely popular, & I think their ability to hire (and pay for) top technical talent is much stronger than that of governments.
Anecdotally, scholars seemed generally in favor of careers at an AISI or evals org, but would prefer to continue pursuing their current research agenda
Again, I think my primary response here is something like: the research interests of the MATS cohort are a function of the program and its selection process – not an immutable characteristic of the world. The ERA example is a "strong" example of prioritizing people with other interests, but I imagine there are plenty of "weaker" things MATS could be doing to select/prioritize fellows who have an interest in governance & technical governance. (Or, put differently, my guess is that there are ways in which the current selection process and mentor pool disproportionately attract/favor those who are interested in the kinds of topics you mentioned.)
If I could wave a magic wand, I would probably have MATS add many more governance & technical governance mentors and shift to something closer to ERA's breakdown. This would admittedly be a rather big shift for MATS, and perhaps current employees/leaders/funders wouldn't want to do it. I think it ought to be seriously considered, though, and if I were a MATS exec or a MATS funder I would probably be pushing for this, or at least asking some serious questions along the lines of "do we really feel like the most impactful thing a training program could be doing right now is serving as an upskilling pipeline for the scaling labs?" (With all due respect to the importance of getting great people to the scaling labs, acknowledging the importance of technical research at scaling labs, agreeing with some of Neel's points, etc.)
Thanks, Neel! I responded in greater detail to Ryan’s comment but just wanted to note here that I appreciate yours as well & agree with a lot of it.
My main response to this is something like "Given that MATS selects the mentors and selects the fellows, MATS has a lot of influence over what the fellows are interested in. My guess is that MATS' current mentor pool & selection process overweight interpretability and underweight governance + technical governance, relative to what I think would be ideal."