The second question is whether minds implementing some such utilitarian (or otherwise non-human) ethic could cooperate with each other the way humans can, thanks to the mutual predictability of our constrained minds.
But we normally seem to see “one death as a tragedy, a million as a statistic” due to scope insensitivity, availability bias, and the like.
Why not trust that people who deal only with the numbers remain normal when they implement cold-blooded utilitarianism? Why not have many important decisions made abstractly by such people? Is wanting decisions made this way, remote from the consequences and up a few meta-levels, a barbaric thing to advocate?
During the 20th century, some societies attempted to implement more or less that policy. The results certainly justify the adjective “barbaric.”
But most of the people involved remained relatively normal throughout. So virtue ethics needs a huge patch to approximate consequentialism.
You are offering a consequentialist argument for a base of virtue ethics plus a rule ensuring that no one makes abstract decisions, but I don’t see how preventing people from making abstract decisions emerges naturally from virtue ethics at all.
I agree with your comment in one sense and was trying to imply it: the bad results are not prevented by virtue ethics alone. On the other hand, you have supplied the consequentialist argument that I was hinting towards, and I think it is valid.