An intelligent lethal machine is one that chooses and attacks a target using hardware and software specialized for identifying and killing humans.
Clearly, there is a spectrum of intelligence. We should define a limit on how much intelligence we are willing to build into machines whose primary purpose is to destroy humans and our habitat.
Though militaries take more thorough precautions than most organizations, history offers many examples of militaries suffering defeats that better planning could have prevented.
An LLM like GPT that hypothetically escaped its safety mechanisms would be limited in the damage it could do by which systems it could compromise. The most dangerous rogue AI is one that gains unauthorized access to military hardware: the more intelligent that hardware, the more damage the rogue AI could cause with it before being eliminated. In the worst case, it could use that hardware to bring about complete societal collapse.
Once countries adopt weaponry, they resist giving it up, even when doing so would serve the broader interests of the global community. Progress has been made in some areas. With enough foresight, we (the global community) could plan ahead by placing limits on intelligent lethal machines sooner rather than later.