Are you saying that some consequentialist systems don’t even have deontological approximations?
It seems like you can have rules of the form “Don’t torture… unless by doing so you can prevent an even worse thing,” which provides a checklist for comparing badness… so I’m not convinced?
Actually, that direction is trivially true: the approximating rule is just “maximize the relevant utility.” What I am saying is that the converse need not hold.