Yes, this is a very confusing and distressing time.
> The Department of War desperately needs full control over the development of any AI used to control their weapons. Yet they haven’t been able to hire the kind of employees who could keep up with frontier companies. The recent fireworks will make such hiring harder. And the closer they come to nationalizing OpenAI, the more likely it is that key employees will leave.
> The closest that I’ve found to a good answer is that the Department of War should use multiple AIs, including at least one open weight AI, and at least one AI developed within the military, with no single AI coming close to controlling half of the forces.
I wouldn’t expect this to work any better than relying on ChatGPT alone, for two reasons. First, multiple LLMs would likely have varying levels of cybersecurity, and the weaker ones would become weak spots for the entire military. Second, the most capable one would likely be able to convince the others to join it in a coup.