Probably the most important feature is the extent to which the human activator can predict the actions of the potentially-robotic weapon.
In the case of a gun, you probably know where the bullet will go, and if you don’t, then you probably shouldn’t fire it.
In the case of an autonomous robot, you have no clue what it will do in specific situations, and a rule that you must not activate it when you can’t predict its actions means you will never activate it at all.
Okay, that actually seems like quite a good isolation of the correct empirical cluster. Presumably guided missiles fall under the ‘not allowed’ category there, as you don’t know what path they’ll follow under surprising circumstances.