For example, a sufficiently crazy deontologist might want to kill all such agents in the name of some sacred moral principle.
A sufficiently crazy consequentialist might want to kill all such agents because he’s scared of what the voices in his head might otherwise do. Your argument is not an argument at all.
And if the sacred moral principle leads the deontologist to kill everyone, that is a pretty terrible moral principle. Usually they’re not like that: the “don’t kill people if you can help it” principle tends to be ranked pretty high up there precisely to prevent things like this from happening.
Smells like consequentialist reasoning. Look, if I had a better example I would give it, but I am genuinely not sure what deontologists think they’re doing if they don’t think they’re just using heuristics that approximate consequentialist reasoning.