“AI threatens to orchestrate sustainable social reform” would belong in the 2025 review, but may I suggest adding a new kind of agenda to your taxonomy for 2025?
Most of your current categories focus on technology, but this article focuses on safety, on the nature of our self-destruction/warfare, and explores what is needed technically from AI to solve it. It identifies caste systems that predate AI, notes that they are dangerous (perhaps increasingly so), and suggests how to adjust AI designs and evaluation processes accordingly.
Perhaps the title of the agenda could be “Understand safety” or “Understand ourselves”, and the increase in social-impact research could be reflected there.
I like this extension of The Alpha Omega Theorem away from the most simplistic God threat. If we extend it far enough, then maybe reality itself counts as an Alpha Omega, and the Alpha Omega hypothesis combines with moral realism to entail that smart-enough superintelligences would follow moral laws for the same reason they open doors before trying to pass through them: because moral laws are real, and reality always defeats those who fight it.
On the flip side, suppose the morally better behavior would be to let humanity go extinct: suppose the Alpha Omega has set up a progressing system in which we are just the most recent version of the dinosaurs, and only through our death will our worst norms (racism?) ever fully fade away. Should we ourselves then dare to defy the Alpha Omega by trying to preserve our existence?
It cuts both ways, but @Darklight offers a nice piece of logic. It shifts my priority from trying to figure out how to survive to trying to figure out what the real moral laws happen to be.