FWIW this doesn’t seem right to me. Indeed, working at labs seems to have caused many people previously doing AI Alignment research to now do work that is basically just capabilities work. Many people at academic labs also tend to drift into capabilities work, or start chasing academic prestige in ways that seem to destroy most of the possible value of their research.
Most of the best work seems to come from independent researchers or very small research organizations (especially if you include things like present-day Redwood and ARC, which are teams of 3-4 people). Many independent researchers do fail to find traction, whereas organizations tend to elicit more reliable output from whoever they hire, but honestly a large fraction of that output seems net-negative to me, and seems to be the result of people being funneled into ML engineering work when they lose traction on hard research problems.
As an added datapoint, I know of an IMO promising researcher who is now at a lab and is working on a write-up aimed at persuading other people in the lab about what sort of research is important. This is better than doing capabilities work, but it is not the object-level research they were previously doing that seemed promising to me.