My own background is in academic social science and national security, for whatever that's worth.
Why should we assume the AI wants to survive? And if it does, what exactly is the "it" that wants to survive?
...
Why should we assume that the AI has boundless, coherent drives?
Are you familiar with the “realist” school of international relations, and in particular their theoretical underpinnings?
If so, I think it'd be helpful to consider Yudkowsky and Soares's arguments in that light. In particular, how closely does the world of emerging superintelligences resemble the anarchic international order that realists posit among states? What are the weaknesses of the realist school of analysis, and do they carry over to AIs?