I basically agree with John Wentworth here that it doesn't affect p(doom) at all, but one thing I will say is that it makes claims that humans will remain the decision-makers and be accountable once AI gets very useful rather hard to credit.
More generally, one takeaway I see from the military's use of AI is that there are strong pressures to let AI systems operate on their own, and this is going to be surprisingly important in the future.