I believe the only answer to the question “how should humans solve the alignment problem?” is this: make ourselves smarter first, and if we do build AGI, ensure it remains far less intelligent than we are. The problem is thus avoided by a single maxim: always be smarter than the things you build.
AI models for autonomous weapons are quite different from off-the-shelf LLMs.
Question: Is Claude only being used as a chatbot/research agent at the Pentagon? Or is there some intent to connect it to APIs for conducting mass surveillance or operating autonomous weapons? Is there some project to embed Claude in military robotic systems, like Project Fetch or something similar?
The article says it’s used mostly for bureaucratic functions, so this seems unlikely. Is there something classified we don’t know about? Or is this just another culture-war issue, i.e., Claude is too “woke” for the Pentagon?