This is a conceivable universe, but do you really think it’s likely? It seems to me much more likely that additional funding opportunities would help AI safety research move at least a little bit faster.
I don’t know enough about the dynamics of academia, or the rate of progress in alignment, to be confident in my assessment. But I think there’s at least a 6% chance that something like this happens, so if people are introducing the field to mainstream academia, they should take precautions to minimize the chance that the effect I described causes significant slowdowns.