[Question] What funding sources exist for technical AI safety research?

What funding sources exist? Who are they aimed at: people within academia, independent researchers, early-career researchers, established researchers? What sort of research are they aimed at: MIRI-style deconfusion, ML-style approaches, more theoretical, less theoretical? What quirks do they have, and what specific things do they target?

For the purposes of this question, I am not interested in funding sources aimed at strategy/infrastructure/coordination/etc., only in direct technical safety research.