[Question] What are the major underlying divisions in AI safety?

I’ve recently been thinking about how different researchers have wildly different conceptions of what needs to be done to solve alignment and which projects are net-positive.

I started making a list of core divisions:

  • Empirical vs. conceptual research: Which is more valuable, or do we need both?

  • Take-off speeds: How fast will take-off be?

  • Ultimate capability level: What level of intelligence will AIs reach? How much of an advantage does this provide them?

  • Offense-defense balance: Which side has the advantage?

  • Capabilities externalities: How bad are these?

Are there any obvious ones that I’ve missed?