No professors were actively interested in the topic, and programs like SPAR, which we helped build, would quickly saturate with applicants. Currently, we are experimenting with a promising system that uses Georgia Tech faculty as mentors and experienced organizers as research managers.
At UChicago we use experienced organizers as research managers and have found this to be successful overall. Outside mentorship is typically still required, but it's just the cherry on top and doesn't demand a large time commitment from those outside people.
I believe SPAR is a really good resource but is becoming increasingly competitive. Very junior-level people (e.g., impressive first-year college students with no research experience) have a lowish probability of being accepted to SPAR, which presents a really good opportunity for university groups to step in.
We will spend less time upskilling new undergraduates who will get those skills from other places soon anyway.
I believe that one of the highest-impact things UChicago's group does is give these "very junior-level" people their first research experience. This can shorten the time until these students are qualified to join an AI safety org by ~1 year, but the total time from "very junior level" to "at an org that does good research" is still probably 3+ years. This falls outside "AI 2027" timelines, but because university groups have much more leverage in a longer-timeline world (5+ years), I think it makes quite a bit of sense.
(Full disclosure: I'm quite biased on the final point because I give those short timelines a much lower probability than most other organizers, both at UChicago and in general. On some level I suspect this is why I come up with arguments for why it makes sense to care about longer timelines even if you believe in short ones. In general, I still tend to think that the students who are most serious about making an impact in a short-timeline world drop out of college (three students at UChicago have done this), and for the students who remain, it makes sense to plan for a 5+ year world.)
More on giving undergrads their first research experience: yes, giving first research experience is high impact, but we want to reserve these opportunities for the best people. This first experience is often most fruitful when they work with a highly competent team, so we are turning our focus to assembling such teams and finding fits for the most value-aligned undergrads.
We always find it hard to form pipelines because individuals are just so different! I don’t even feel comfortable using ‘undergrad’ as a label if I’m honest…
Thanks for the insights! The structured research program at UChicago you describe is exactly what we're trying to do now, but it's too soon to say whether it's working.
I think the idea of giving people their first research experience can be valuable, although we tend to find that the students attracted to these opportunities already have, or are getting, research experience, even if they are at a "very junior level." Something like <5% of the new undergraduates who self-select into wanting to do research with us lack prior exposure to technical research, so I'm unsure how valuable that is as a goal.
I think what’s high-impact here is getting people to their first published work related to AI safety, which can dramatically shorten timelines to FTE work.