Here is what David Pearce has to say about the FAQ (via Facebook):
Lucid. But the FAQ (and lesswrong) would be much stronger if it weren’t shot through with anthropocentric bias...
Suggestion: replace “people” with “sentient beings”.
For the rational consequentialist, the interests of an adult pig are (other things being equal) as important as those of a human toddler.
Sperm whales are probably more sentient than human toddlers; chickens probably less.
Ethnocentric bias now seems obvious. If the FAQ said “white people” throughout rather than “people”, then such bias would leap off the page—though it wouldn’t to the Victorians.
For the rational consequentialist, the interests of an adult pig are (other things being equal) as important as those of a human toddler.
Because, in order to be a rational consequentialist, one needs to forget that human toddlers grow into adult humans and adult pigs grow into… well, adult pigs.
Careful, that leads straight into an abortion debate (via “if you care that much about potential development, how much value do you give a fetus / embryo / zygote?”).
I am aware. If the thought process involved is “We can’t assign values to future states because then we might be opposed to abortion”, then I recommend abandoning that process. If the thought process is just “careful, there’s a political schism up ahead”, then it fails to recognize that we are already in a political schism about animal rights.
There is more than one way to interpret your original objection and I wonder whether you and NihilCredo are talking about the same thing. Consider two situations:
(1) The toddler and the pig are in mortal danger. You can save only one of them.
(2) The toddler and the pig will both live long lives but they’re about to experience extreme pain. Once again, you can prevent it for only one of them.
I think it’s correct to take future states into consideration in the second case, where we know there will be some suffering in the future and can minimize it by asking whether humans or pigs are more vulnerable to suffering resulting from past traumas.
But basing the decision about which being gets to have the descendants of its current mind-state realized in the future on the awesomeness of those descendants, rather than solely on the current mind-state, seems wrong. And the logical conclusion of that wouldn’t be opposition to abortion; it would be opposition to anything that isn’t efficient procreation.
But basing the decision about which being gets to have the descendants of its current mind-state realized in the future on the awesomeness of those descendants, rather than solely on the current mind-state, seems wrong.
Why throw away that information? Because it’s about the future?
I don’t know how to derive my impression from first principles. So the answer has to be: because my moral intuitions tell me to do so. But they only tell me so in this particular case—I don’t have a general rule of disregarding future information.
Ok. I will try to articulate my reasoning, and see if that helps clarify your moral intuitions: a “life” is an ill-defined concept, compared to a “lifespan.” So when we have to choose one of two individuals, the way our choice changes the future depends on the lifespans involved. If the choice is between saving someone 10 years old with 70 years left or someone 70 years old with 10 years left, then one choice results in 60 more years of aliveness than the other! (Obviously, aliveness is not the only thing we care about, but this is good enough for a first approximation to illustrate the idea.)
And so the state between now and the next second (i.e. the current mind-state) is just a rounding error when you look at the change to the whole future; in the future of the human toddler it is mostly not a human toddler, whereas in the future of the adult pig it is mostly an adult pig. If we prefer adult humans to adult pigs, and we know that adult pigs have a 0% chance of becoming adult humans and human toddlers have a ~98% chance of becoming adult humans, then combining those pieces of knowledge gives us a clear choice.
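The reasoning above can be made concrete as a small expected-value calculation. This is only an illustrative sketch: the ~98% figure comes from the comment, but the lifespans and per-year values below are made-up placeholders, not real estimates.

```python
# Sketch of the expected-value argument: weight a being's remaining
# years by the probability they are lived as an adult human.
# Placeholder numbers except the ~98% figure from the discussion.

def expected_future_value(p_adult_human, years_left,
                          value_adult_human_year, value_other_year):
    """Expected value of a being's remaining lifespan, splitting each
    year between 'adult human' and 'other' outcomes by probability."""
    return years_left * (p_adult_human * value_adult_human_year
                         + (1 - p_adult_human) * value_other_year)

# Toddler: ~98% chance of growing into an adult human, ~70 years left.
toddler = expected_future_value(0.98, 70, 1.0, 0.5)

# Pig: 0% chance of becoming an adult human, assumed ~12 years left.
pig = expected_future_value(0.0, 12, 1.0, 0.3)

print(toddler > pig)  # True: the toddler's expected future dominates
```

Under any assignment where adult-human years are valued at least as highly as pig years, the toddler’s longer lifespan and near-certain development swamp the one-second difference in current mind-states, which is the “rounding error” point.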
If this is not a general principle, it may be worthwhile to try and tease out what’s special about this case, and why that seems special. It may be that this is a meme that’s detached from its justification, and that you should excise it, or that there is a worthwhile principle here you should apply in other cases.
I meant the latter. Your assessment is correct, although the mind-killing ability of a real-life debate (prenatal abortion y/n) is significantly higher than that of a largely hypothetical debate (equalising the rights of toddlers and smart animals).
If the FAQ said “white people” throughout rather than “people”, then such bias would leap off the page—though it wouldn’t to the Victorians.
What would leap off the page is the ‘white people’ phrase. Making that explicit would be redundant and jarring. Perhaps even insulting. It should have been clear what ‘people’ meant without specifying color.
I can’t even use nonstandard pronouns without it impeding readability, so I think I’m going to sacrifice precision and correctness for the sake of ease-of-understanding here.
“People” need not mean “humans”, it can mean “people”.
Also, people should really stop using the word “sentient”. It’s a useless word that seems to serve no purpose beyond causing people to get intelligence and consciousness confused. (OK, Pearce does seem pretty clear on what he means here; he doesn’t seem to have been confused by it himself. Nonetheless, it’s still aiding in the confusion of others.)
In fact, adult pigs are of more concern than toddlers under two years old; they pass a modified version of the mirror test and thus seem to be self-conscious. http://en.wikipedia.org/wiki/Pigs#cite_note-AnimalBehaviour-10
Now we know Clippy’s true identity!
(I kid, I kid. Thinking correctly about morality applied to non-human sentient beings is a Tough Problem)