I suspect that the undervaluing of field-building is downstream of EA overupdating on The Meta Trap (I appreciated points 1 & 5; point 2 probably looks worst in retrospect).
I don’t know if founding is still undervalued—seems like there’s a lot in the space these days.
“I confess that I don’t really understand this concern”
Have you heard of Eternal September? If a field/group/movement grows slower than a certain rate, there’s time for new folks to absorb the existing culture, knowledge, and strategic takes, and then pass them on to the folks after them. However, this breaks down if the growth happens too fast.
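As a toy sketch of this dynamic (the model, its parameters, and the `acculturated_share` function are all made up for illustration; nothing here is empirical):

```python
# Toy model of the Eternal September dynamic: newcomers can only be
# acculturated as fast as the existing culture-carriers can mentor them,
# so mentoring capacity scales with the current acculturated share.

def acculturated_share(growth_rate: float, assimilation_rate: float = 0.5,
                       steps: int = 40) -> float:
    acculturated, newcomers = 100.0, 0.0
    for _ in range(steps):
        # New members join in proportion to current community size.
        newcomers += growth_rate * (acculturated + newcomers)
        share = acculturated / (acculturated + newcomers)
        # Conversion of newcomers is throttled by the acculturated share.
        converted = assimilation_rate * share * newcomers
        acculturated += converted
        newcomers -= converted
    return acculturated / (acculturated + newcomers)

for g in (0.05, 0.2, 0.8):
    print(f"growth {g:.0%} per period -> acculturated share {acculturated_share(g):.0%}")
```

Below a threshold growth rate the acculturated share stabilizes high; above it, the culture-carriers get swamped and the share collapses, which is the Eternal September failure mode.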
“We should be careful not to dilute the quality of the field by scaling too fast… If outreach funnels attract a large number of low-caliber talent to AI safety, we can enforce high standards for research grants and second-stage programs like ARENA and MATS. If forums like LessWrong or the EA Forum become overcrowded with low-calibre posts, we can adjust content moderation or the effect of karma on visibility.”
Firstly, filtering/selection isn’t free. It takes money and time from highly skilled people, and a larger applicant pool forces you to filter more aggressively, which increases the chance that good candidates are overlooked in the sea of applications.
Secondly, people need high-quality peers in order to develop intellectually. Even if second-stage programs manage to avoid being diluted, adding a bunch of low-caliber talent to local community groups would make it harder for people to develop intellectually before reaching those programs; in other words, it would undercut the talent development pipeline feeding these later-stage programs.
“Additionally, growing the AI safety field is far from guaranteed to reduce the average quality of research, as most smart people are not working on AI safety and, until recently, AI safety had poor academic legibility. Even if growing the field reduces the average researcher quality, I expect this will result in more net impact”
I suspect AI safety research is very heavy-tailed, and that what would encourage the best folks to enter the field is not so much the field being large as the field having a high density of talent.
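To gesture at what “heavy-tailed” implies here (the lognormal distribution and its parameters are assumptions for the sketch, not measurements of actual research impact):

```python
# If per-researcher impact is lognormally distributed with a wide spread,
# a small fraction of researchers accounts for most of the total impact.

import random

random.seed(0)
impacts = sorted((random.lognormvariate(0, 2) for _ in range(10_000)), reverse=True)
total = sum(impacts)
print(f"top 1% share of total impact: {sum(impacts[:100]) / total:.0%}")
print(f"top 10% share of total impact: {sum(impacts[:1000]) / total:.0%}")
```

Under a distribution like this, attracting or repelling a handful of top people swings total output more than adding many median contributors, which is why talent density matters more than field size.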