My guess is that forming small teams consisting of a skilled mathematician, a skilled programmer, a skilled ML theorist, and a skilled manager would be a good way to make progress. Make a hundred or a thousand such groups, on the assumption that maybe 1% of them will pay off.
I think this is a good idea, but it doesn’t quite feel like an answer to the question (at least as I understood it). It amounts to “get a bunch of serial researchers working in parallel and hope one of them succeeds,” which I think So8res articulated in “AI alignment researchers don’t (seem to) stack.”
I do think small teams with a few different skillsets working together are probably a good way to go in many cases. Your comment here reminds me of Wentworth’s team structure in MATS Models, although that team only had three people.
Yeah, my experience from working in academia suggests that the odds of finding two researchers with a similar frame on a novel problem and good enough social chemistry that they add to each other’s productivity are somewhere between 1/200 and 1/1000, even after filtering for ‘competent researchers interested in the general topic’. So I’m not at all surprised that getting about 10 new researchers working on alignment hasn’t yet produced such a match.
From my experience working in industry, I think that a big failing of the attempts I’ve seen at organizing research groups is undervaluing a good manager. Having someone who is ‘people-oriented’ to coach and coordinate is important for preventing burnout, and for keeping several ‘research-oriented’ people focused on working together on a given task instead of wandering off in different directions.
Also, I’m hopeful about a separate approach: deliberately ‘cyborg’-ing researchers by getting them proficient with the latest SoTA models, and fine-tuning SoTA models specifically for assisting in research, could help speed up individual researchers. Maybe an AI able to do the research all on its own would already be too dangerous, but I don’t think that holds for one that is merely ‘useful enough to be a good tool’.