I would expect that most actual progress in weaponizing AI would not be openly shared.
However, the existing documentation should provide some grounding for talking points. Palantir's discussion of how the system is configured to protect the privacy of soldiers' medical data is an interesting window into how they see "safe AI".