“AI threatens to orchestrate sustainable social reform” would belong in the 2025 review, but may I suggest adding a new kind of agenda to your taxonomy for 2025?
Most of your current categories focus on technology, but this article focuses on safety, on the nature of our self-destruction and warfare, and explores what is needed technically from AI to solve it. It identifies caste systems that predate AI, notes that they are dangerous (perhaps increasingly so), and considers how to adjust AI designs and evaluation processes accordingly.
Perhaps the title of the agenda could be “Understand safety” or “Understand ourselves”, and the increase in social impact research could be reflected here.