autonomous weapons are unlikely to directly contribute to existential risk
I disagree, and would’ve liked to see this argued for.
Perhaps the disagreement is at least somewhat about what we mean by “directly contribute”.
Autonomous weapons seem like one of the areas where competition is most likely to drive actors to sacrifice existential safety for performance. This is because the stakes are extremely high, quick response time seems very valuable (meaning that keeping a human in the loop becomes costly), and international agreements around safety seem hard to imagine without massive geopolitical changes.
That was part of the summary; it wasn’t part of the opinion. While I haven’t read the transcript myself, I just looked and found this quote:
The stakes for autonomous weapons might be big, but are certainly not existential. I think that holds in any reasonable interpretation of what autonomous weapons might do, really, unless you start thinking about autonomy wired into, like, nuclear launch decisions, which is basically nuts. And I don’t think that’s really what’s on the table, realistically, for what people might be worried about.
Without having read the transcript either, this sounds like it’s focused on near-term issues with autonomous weapons, and not meant to be a statement about the longer-term role autonomous weapons systems might play in increasing X-risk.
Yeah, agreed, but presumably that’s what the entire podcast is about.