[Question] How does the ever-increasing use of AI in the military for the direct purpose of murdering people affect your p(doom)?

I haven’t personally heard many recent discussions about it, which is strange considering that defense startups like Anduril and Palantir are developing systems for military use, OpenAI recently deleted a clause prohibiting the use of its products in the military sector, and governments are also working on AI-piloted drones, missiles, information systems (hello, Skynet and AM), etc.

And the most recent and perhaps most chilling use of it comes from Israel’s invasion of Gaza, where the Israeli army has marked tens of thousands of Gazans as suspects for assassination using the Lavender AI targeting system, with little human oversight and a permissive policy toward civilian casualties.

So how does all of this affect your p(doom)? What are your general thoughts on it, and how do we counter it?

Relevant links:

https://www.972mag.com/lavender-ai-israeli-army-gaza/

https://www.wired.com/story/anduril-roadrunner-drone/

https://www.bloomberg.com/news/articles/2024-01-10/palantir-supplying-israel-with-new-tools-since-hamas-war-started
