Didn’t phrase clearly what I meant by cut-off.
Let D be some objective measure of distance (probably to do with Kolmogorov complexity) between individuals. Let M be my moral measure of distance, and assume the cut-off is 1.
Then I would set M(a,b) = D(a,b) whenever D(a,b) < 1, and M(a,b) = 1 whenever D(a,b) >= 1. The discontinuity is in the derivative, not the value.
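In other words, M is just D clamped at the cutoff: M(a,b) = min(D(a,b), 1). A minimal sketch of that clamp (the function name and the scalar interface are illustrative assumptions, not anything from the original):

```python
def moral_distance(d: float, cutoff: float = 1.0) -> float:
    """Clamp an objective distance d at the cutoff.

    Equivalent to: M = d if d < cutoff else cutoff, i.e. min(d, cutoff).
    The value is continuous everywhere; only the derivative jumps
    (from 1 to 0) at d == cutoff.
    """
    return min(d, cutoff)

print(moral_distance(0.3))  # 0.3 — below the cutoff, M tracks D exactly
print(moral_distance(2.5))  # 1.0 — at or beyond the cutoff, M is capped
```

Note that moral_distance(1.0) and moral_distance(100.0) both return 1.0, which is exactly the property the objection below targets: beyond the cutoff, all further divergence is morally invisible.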
That doesn’t resolve quanticle’s objection. Your cutoff still suggests that a reasonably individualistic human is just as valuable as, say, the only intelligent alien being in the universe. Would you agree with that conclusion?
No. I grant special status to exceedingly unique minds, and to the last few of a given species.
But human minds are very similar to each other, and granting different moral status to different humans is a very dangerous game. Here, I am looking at the practical effects of moral systems (Eliezer’s post on “running on corrupted hardware” is relevant). The theoretical gains of treating humans as having varying moral status are small; the practical risks are huge (especially as our societies, through cash, reputation and other factors, are already pretty good at distinguishing between people without having to further grant them different moral status).
One cannot argue: “I agree with moral system M, but M has consequence S, and I disagree with S”. Hence I cannot agree with granting people different moral status, once they are sufficiently divergent.