This strikes me as a difficult-to-solve issue. In the past, LessWrong has generally leaned in the direction of “democratic governments are a less-imperfect means of governing AI development than private companies”. Now, a categorical ban on the use of LLMs in a military context seems like an obviously-good thing, but that’s not what’s being discussed here. There are plenty of companies in America that are actively on board with the use of LLMs by the military, and certainly the same is true in other countries.
Is there any precedent for a major corporation producing a defense-priority good — one believed by those in power to be essential to the next generation of warfare — outright refusing to work with its national military? I'd be surprised if so.
On a more strategic note, I think this makes clear something I’ve been saying for a while, which is that it was a severe mistake for AI safetyists to allow their key issue to be co-opted by partisan interests. If Anthropic had quickly and publicly addressed things like this and this, they might have been able to make a bipartisan push in the direction of opposing weaponization of their products, which, in a vacuum, would be broadly popular. Instead, to a lot of people, it looks like “This company talks about safety, but pretty clearly hates us. Now they think they have the right to overrule the government on matters of life and death? They’re dangerous!”.
This outcome was entirely avoidable, and it will very likely have severe consequences for a lot of people.