Counterintuitively, I kind of hope Palantir does make progress in weaponizing AI. I think that's a good way to get the government and general populace to take AI risks more seriously, while not actually advancing the Pareto frontier of superintelligent AGI and its concomitant existential risks. My experience talking with non-technical friends and family about AI risk is that ‘Robots with guns’ is a much easier risk for them to grasp than a non-embodied superintelligent schemer.
I would expect that most actual progress in weaponizing AI would not be openly shared.
However, the existing documentation should provide some grounding for talking points. Palantir's discussion of how the system is configured to protect the privacy of soldiers' medical data is an interesting window into how they see “safe AI”.
Galaxy-brain, pro e/acc take: advance capabilities fast enough that people freak out and we create a crisis that enables sufficient coordination to avoid existential catastrophe
To what extent would you expect the government’s or general populace’s responses to “Robots with guns” to be helpful (or harmful) for mitigating risks from superintelligence? (Would getting them worried about robots actually help with x-risks?)