[Question] What are the relative speeds of AI capabilities and AI safety?
If you want to solve AI safety before AI capabilities become too great, then it seems that AI safety research must have some of the following advantages:
More researchers
Better researchers
Fewer necessary insights
Easier necessary insights
A greater ability to steal insights from AI capabilities research than the reverse
...
Is this likely to be the case? Why? Another way to ask the question: under which scenarios does aligning an AI not add extra time relative to just building it?
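One rough way to formalize the condition the list above gestures at (the notation here is mine and purely illustrative): let $I$ be the number of insights a field still needs, $d$ their average difficulty, $n$ its researcher headcount, $q$ mean researcher quality, and $\tau$ the fraction of usable progress it can steal from the other field. Writing subscripts $s$ for safety and $c$ for capabilities, safety finishes first roughly when

$$\frac{I_s \, d_s}{n_s \, q_s \, (1+\tau_s)} \;<\; \frac{I_c \, d_c}{n_c \, q_c \, (1+\tau_c)}.$$

Each item in the list corresponds to shrinking one factor on the left (fewer or easier insights, more or better researchers, more stolen insights) or growing one on the right; the question is whether any realistic scenario makes the inequality hold.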