What evidence or arguments can you offer to support the claim that “Much of the knowledge described in Luke’s recent post on the cognitive science of rationality would have been impossible to acquire under such a ban”? I agree that much of the knowledge described in that post was gained through testing on chimpanzees. It doesn’t follow, however, that this knowledge could not have been obtained in ways involving no experimentation on those animals.
I don’t quite understand your third point above. Suppose it was true that “Banning chimp testing should thus be done only in conjunction with allowing human testing.” Why are you then opposing the ban on chimp testing, rather than advocating a lift on the ban on human testing? In the absence of further elaboration, your position smacks of status quo bias.
Chimps are morally relevantly similar to human babies and toddlers. Since you defend experimentation on chimps, you should also, I believe, defend experimentation on human babies and toddlers. Do you?
More generally, I think we should be cautious of endorsing the conclusions that we reach by considering the merits of arguments for and against animal experimentation. As humans, we have a deep-seated bias against members of other species. It is likely that this bias has influenced our views on issues involving widespread use of non-human animals. Since we also have a strong tendency to rationalize the views that we find ourselves subscribing to, it seems advisable to correct for this potential source of bias by being extra skeptical of arguments that appear to show that animal experimentation is morally permissible.
True, but if it’s not the area in which Phil judges moral relevance, then I want to know why he thinks chimps and humans are different.
I was going to use the comparison “Humans born mentally handicapped to the point that their cognitive function is equivalent to chimps.” (This avoids the potential issue of “babies grow up to be average humans.”)
If you’re not willing to advocate testing on humans who are similar to chimps, I want to know why.
I was going to use the comparison “Humans born mentally handicapped to the point that their cognitive function is equivalent to chimps.” (This avoids the potential issue of “babies grow up to be average humans.”)
Along similar lines I was going to propose that it should be considered moral to test on “Low IQ Jocks as soon as they finish High School”. After all they have finished their glory years and are different to me in similar ways to how I am different to a chimpanzee. But I decided not to post because I decided it was dangerous to go anywhere near a space including “different” and “less moral consideration”.
I agree that it’s dangerous, but I think any remotely productive use of this thread is going to have to go there. If we’re not asking that question, we’re not asking the right questions.
True, but if it’s not the area in which Phil judges moral relevance, then I want to know why he thinks chimps and humans are different.
I do think chimps and humans are different; but most members of PETA probably believe they are more different than I do. I think you’re reading positions into my post that aren’t there.
If you’re not willing to advocate testing on humans who are similar to chimps, I want to know why.
I advocated alternatively testing on humans like myself.
I apologize, I was focusing on a lot of the comments and missed that you had made that point.
I don’t currently know what the rules are for human testing. I think it should be theoretically possible for humans to submit themselves for whatever testing they want, but I also think that as soon as that market exists, there will be those who attempt to exploit it in ways I’d consider unethical. That’s a complex issue that I don’t have an opinion on yet.
I was going to use the comparison “Humans born mentally handicapped to the point that their cognitive function is equivalent to chimps.” (This avoids the potential issue of “babies grow up to be average humans.”)
It is not clear to me how that avoids the issue of including the future.
It avoids the issue of including the future of particular people. Some people care about that, others don’t, but it reduces the range of reasons you might object to the comparison.
From what I know, I personally weight chimps as maybe 1/3 as morally significant as humans. I’m sometimes willing to sacrifice humans to save other humans, and I’d sacrifice a chimp to save about 1/3 as many humans. (I’d also sacrifice a human to save 3x as many chimps.) This is mostly an intuitive belief. I can imagine myself changing the number to something as low as 1/10th, maybe even as low as 1/100th (I don’t expect to drop it that far).
It’s important to note, though, that I DON’T sacrifice humans on a 1-for-1 trade off without their consent. I don’t want to live in a world where someone can sacrifice me without me having a say in the matter. There may be cases where I’m willing to consent to sacrifice. I’m not sure if I can identify them right now.
There are still circumstances where, while pissed, I’d grudgingly accept that the Mastermind doing the sacrificing was right to do so. (If they had to divert a train that was going to kill a lot of people, for example. Probably more than 5 though). The number of lives saved to be worth it also has to consider how perfect the information is, and the likelihood that the sacrificer isn’t running on damaged hardware.
So theoretically, I’m okay with sacrificing chimps to save arbitrarily large numbers of people, but because the chimps CAN’T consent, I’d have to be willing to sacrifice somewhere between 1/3 and 1/10th as many humans to accomplish the same thing.
I read your post and tried to come up with an ‘exchange rate’ of my own, and it was much more difficult to do than I thought it would be before I tried it. I thought that it would be along the lines of thousands/hundreds of thousands of chimps == 1 human, as I couldn’t conceive of letting one human die in exchange for any smaller number of chimps, but then I realized that it would be much easier to think of dead chimps as an opportunity cost, and was just reacting with instinctual revulsion. This is assuming that dead chimps can’t be used (to the same extent) as live chimps to aid in medical research.
So, what is the current value that we place on the life of a chimp?
If after m (successful) studies each using n chimps, we can save l human lives, then (assuming in the worst case that each study kills all n chimps):
(mn)(The value of a chimp life in utilons) = l(The value of a human life in utilons)
So: (mn)/l = The value of a human life/The value of a chimp life
This estimate is going to be higher than in real life, as we don’t kill all the chimps used in a typical study. The difficulty would be in quantifying the number of studies necessary to save a human life, or the number of lives saved by a particular discovery.
However, thinking this way, I would place my ‘exchange rate’ on the order of 200-300 chimps to 1 human life; if necessary, we should let 1 human die so that 300 chimps might live so that their value as test subjects could be used to save other humans.
I just don’t think chimps are intelligent enough to have significant lives on the same order of magnitude as a human’s; I think that 1/3 or 1/10th of a human’s life is much too high a value.
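The back-of-envelope formula above can be sketched in code. The function name and the input numbers below are hypothetical, chosen only to illustrate how a worst-case exchange rate in the 200–300 range could arise; they are not real study data:

```python
def human_to_chimp_exchange_rate(num_studies, chimps_per_study, lives_saved):
    """Worst-case exchange rate (m * n) / l: how many chimp lives are
    spent per human life saved, assuming every chimp in every study
    dies (an overestimate, per the caveat above)."""
    if lives_saved <= 0:
        raise ValueError("need at least one life saved to define the rate")
    return (num_studies * chimps_per_study) / lives_saved

# Hypothetical inputs: 50 studies of 5 chimps each, together saving 1 life.
rate = human_to_chimp_exchange_rate(num_studies=50, chimps_per_study=5, lives_saved=1)
print(rate)  # 250.0 chimps per human life, inside the 200-300 range above
```

Because the worst-case assumption inflates the numerator, the true rate would be lower once surviving chimps are subtracted from m * n.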
However, thinking this way, I would place my ‘exchange rate’ on the order of 200-300 chimps to 1 human life; if necessary, we should let 1 human die so that 300 chimps might live so that their value as test subjects could be used to save other humans.
I just don’t think chimps are intelligent enough to have significant lives on the same order of magnitude as a human’s; I think that 1/3 or 1/10th of a human’s life is much too high a value.
Have you corrected for your estimate of p(chimps are uplifted in the next fifty years)?
Edit: Okay, if it makes a difference I only realized the Planet of the Apes reference after I posted, I was making a serious point about the difference between human toddlers and chimps as it relates to the possibility of future personhood.
I hadn’t considered the possibility that chimps could/would be uplifted in the near future (50 years or mean chimp lifetime is a good rule of thumb); I think it’s entirely possible that the technology would be there, but I don’t understand the motivation for wanting to uplift chimps. I guess the reasoning is that more sapient beings == more interesting conversations, more math proofs, more works of art, so more Fun, but I’m not sure that we would want to uplift chimps if we had the technology to do so.
If we had the technology to uplift a species, I think it would be likely that we had the technology to have FAI or uploaded human brains, which would be a more efficient way to have more sapient beings with which to talk. Is it immoral to leave other species the way they are if transhumanism or FAI take off?
If we had the technology to uplift a species, I think it would be likely that we had the technology to have FAI or uploaded human brains, which would be a more efficient way to have more sapient beings with which to talk.
This seems strange to me. Can you expand on your reasoning? Uplifting seems to me to be potentially a lot simpler. The tech level needed to identify the genes most responsible for human intelligence is not that much beyond our current one. And the example species you’ve used, chimps, are close enough to humans that simply inserting at least some of those genes into the chimp genome would likely substantially increase their intelligence.
Uplifting seems orders of magnitude easier than uploading at least.
I’ll concede that you are probably right about uplifting being easier.
This was my reasoning:
Properly identifying which gene encodes for what and usefully altering genes to express a particular phenotype as complex as human-level intelligence would require (in any reasonable amount of time) at the least a narrow AI to process and refine the huge amount of data in the half-chromosome or so that separates us from chimps. Chimps are close to humans, yes, but altering their DNA to uplift them seems to me to be the type of problem that would either take years of Manhattan-Project level dedication with the technology we have right now, or some sort of AI to do the heavy lifting for us.
I think I’m way out of my depth here, though, as I don’t know enough about genetic engineering or AI research to know with confidence which would be easier.
If the following is very wrong or morally abhorrent, please correct me rather than downvote. I’m trying to work it out for myself and what I came up with seems intuitively incorrect. It is also based on the idea that the mentally handicapped have chimp-like intelligence, which I don’t know to be true but is implied by your comment.
So basically, what makes us homo sapiens is our ancestry, but what makes us people is our intelligence. An alien with a brain that somehow worked exactly equivalently to ours would be our equal in every important way, but an alien with a chimp-like intelligence (one that for our purposes would essentially BE a chimp) wouldn’t. It would deserve sympathy, and it would be wrong to hurt it for no reason, but I wouldn’t value an alien-chimp’s life as highly as a human’s or an alien-human’s. So it seems to me that it follows that the mentally handicapped (if they indeed have chimp-like intelligences) don’t in fact deserve more moral consideration than alien-chimps or earth-chimps (ignoring their families, which presumably have normal intelligences and would very much not approve of their use in experiments). If there are no safer ways to get the same results as we do from chimp studies, which I believe to be the case, then the best option we have for now is to continue studying them. Studying the mentally handicapped would be similarly bad-but-acceptable, but I wouldn’t advocate for it since it would be so unlikely to ever occur. Testing on the mentally handicapped seems very wrong, but only for “speciesist” reasons as far as I can tell.
It is also based on the idea that the mentally handicapped have chimp-like intelligence, which I don’t know to be true but is implied by your comment.
I specified “people mentally handicapped to the point that they are equivalent to chimps.” There’s a lot of ways one can be mentally handicapped.
For the record, I’m a vegetarian. I measure morality based off median suffering/life satisfaction. Intelligence is only valuable insofar as it can improve those metrics, and certain kinds of intelligence probably result in a wider and deeper source of life satisfaction.
I don’t think chimps contribute dramatically to universal flourishing, but I’m not sure that the average human does either. I think that it’s best to have a rule “don’t harm sentient creatures”, but to occasionally turn a blind eye to certain actions that benefit us in the long term.
i.e. the guy who invented the smallpox vaccine did something horribly unethical, which we should not allow on a regular basis, especially not today when we have more options for testing. Occasionally, doing something like that is necessary for the greater good, but most people who think their actions are sufficiently “greater good” to break the rules are wrong, so we need to discourage it in general.
This is a nice rule in principle, but in practice becomes tough. First, how do we define sentience? Second, what counts as ‘harm’? Is there an action/inaction distinction here? If it is morally unacceptable to let humans in the developing world starve, do we have a similar moral obligation to chimps? If not, why not?
the guy who invented the smallpox vaccine did something horribly unethical, which we should not allow on a regular basis, especially not today when we have more options for testing
I’m not sure what you are talking about here. Can you expand?
This is a nice rule in principle, but in practice becomes tough.
Oh in practice it’s definitely tough. Optimal morality is tough. I judge myself and other individuals on the efforts they’ve made to improve from the status quo, not on how far they fall short of what they might hypothetically be able to accomplish with infinite computing power.
In my ideal world, suffering doesn’t happen, period, except to the degree that some amount of suffering is necessary to bring about certain kinds of happiness. (i.e. everyone, animals included, gets exactly as much as they need, nothing more.)
I don’t know to what extent that’s actually possible without accidentally wreaking havoc on the ecosystem and causing all kinds of problems, and in the meantime it’s easier to get public support for helping other humans anyway.
Smallpox
I’m working from old memories from middle school, and referencing what is probably a bit of a “folk version” of the real thing, but my recollection was that Edward Jenner tested his smallpox vaccine on some kid, then gave the kid a full dose of smallpox without his consent.
SOMEBODY had to try that at some point, and I think Jenner had reasonable evidence, but I don’t think that sort of thing would fly today.
SOMEBODY had to try that at some point, and I think Jenner had reasonable evidence, but I don’t think that sort of thing would fly today.
I agree it wouldn’t pass muster today, but that may just be because we aren’t facing a disease as deadly as smallpox.
There’s a good moral case for experimenting on somebody without their consent IF:
1) Doing the experiment has a high probability of getting a cure into widespread use quickly
2) Getting consent for an equivalent experiment would be difficult or time-consuming
3) The disease is prevalent and serious enough that a delay to find a consenting subject is a bigger harm than the involuntary experiment.
If you think our moral concern should follow intelligence then it follows that chimps and the mentally handicapped are not morally equal to humans of normal intelligence. Depending how much differing intelligence results in differing moral consideration this could justify chimp and mentally handicapped testing.
But while some level of intelligence does seem to be necessary for an animal to suffer in a way we find morally compelling, it does not follow that abusing the slightly less intelligent is at all justified. It is not at all obvious that the mentally handicapped or chimpanzees suffer less than humans of normal intelligence. Nor is it obvious that mentally handicapped humans and chimpanzees don’t differ in this regard. But intelligence is almost certainly not the same thing as moral value. There are possibly entities that are very intelligent but for which we would have little moral regard.
Right, that makes sense. I guess if something can suffer and notice its suffering and wish it weren’t suffering, then it should be as morally valuable as a person... maybe.
But while some level of intelligence does seem to be necessary for an animal to suffer in a way we find morally compelling it does not follow that abusing the slightly less intelligent is at all justified.
I think dogs are “capable of suffering in a way I find morally compelling” though, and I would sacrifice probably a lot of dogs to save myself or another human. Is that just me being heartless?
There are possibly entities that are very intelligent but for which we would have little moral regard.
I mentioned that the hypothetical aliens would have brains that work just like ours, not that they would be just as intelligent.
Your method should be to figure out what it is about humans that makes them morally valuable to you and then see if those traits are found in the same degree elsewhere.
What evidence or arguments can you offer to support the claim that “Much of the knowledge described in Luke’s recent post on the cognitive science of rationality would have been impossible to acquire under such a ban”? I agree that much of the knowledge described in that post was gained through testing on chimpanzees. It doesn’t follow, however, that this knowledge could not have been obtained in ways involving no experimentation on those animals.
Knowledge of brain responses cannot currently be obtained any way other than by observing brain responses. If you want the results to apply well to humans, you often have to observe either great apes or humans. It usually isn’t practical to observe humans, because the restrictions on human experimentation are even tighter.
I don’t quite understand your third point above. Suppose it was true that “Banning chimp testing should thus be done only in conjunction with allowing human testing.” Why are you then opposing the ban on chimp testing, rather than advocating a lift on the ban on human testing?
A. There isn’t a ban on human testing; it’s just very difficult to get approval for anything with any degree of invasiveness.
B. My post says, “Banning chimp testing should thus be done only in conjunction with allowing human testing.” Your question doesn’t make sense as a response to that.
Chimps are morally relevantly similar to human babies and toddlers. Since you defend experimentation on chimps, you should also, I believe, defend experimentation on human babies and toddlers. Do you?
Definitely. We’ve learned a lot of important things about human cognitive development from experiments on human babies and toddlers. These are harmless experiments. (Well, since the 1980s, anyway.) Can I assume, for example, that you oppose allowing someone to show different objects to a baby, and measure which object they spend more time looking at? Because these are among the types of experiments that the editors would like to ban.
Since we also have a strong tendency to rationalize the views that we find ourselves subscribing to, it seems advisable to correct for this potential source of bias by being extra skeptical of arguments that appear to show that animal experimentation is morally permissible.
I agree. And I already do that. Doing so does not imply that you will always conclude that animal experimentation is not morally permissible.
Because these are among the types of experiments that the editors would like to ban.
Would you be okay with a compromise ban that says great apes can be experimented on only in similar circumstances to those we allow for experiments on toddlers?
This is only true for a small subset of moralities.
But true for a large subset of Less Wrong posters’ moralities.
Edit: Why downvotes?
You made a statement with undue confidence, and the votes would appear to indicate that at the very least, this large subset is not monitoring this thread.
Last I checked utilitarians of various sorts were pretty common in these parts.
Utilitarian ethics don’t necessarily imply the moral equivalence of a chimp to a toddler. That’s more a question of personhood criteria, which you can easily express in utilitarian or deontological terms.
I do think LWers would be a lot more likely to make that equivalence, but I suspect that’s more because for various reasons we tend to think of personhood mainly in terms of cognition rather than pattern-matching against appearance and behavior.
They aren’t necessarily related, but for lots of reasons animal rights is associated with utilitarianism. In particular, utilitarianism tends to recommend a much lower threshold of intelligence for an animal to be due our moral consideration, since the only requirement is experiencing pleasure/pain or having desires. Personhood is usually an epiphenomenal category in utilitarianism, referring to whatever class of entities we should be morally concerned with. It is often an essential category in deontology, and its confines are much stricter (see Kantianism). Utilitarianism and expanding the sphere of moral concern are historically associated as well: Jeremy Bentham, Peter Singer, etc. It is not unreasonable to infer from the popularity of utilitarianism here that animal rights is also popular.
Your reason is a good one too, though. And I’m not speaking from total ignorance here, either. I’ve been around these parts for a few years and I’ve seen plenty of upvoted comments about animal rights and had one or two discussions that bear on the subject. Someone is welcome to make a poll, but I don’t really think making the observation is worthy of downvotes.
For what it’s worth, I don’t care that much about animal rights; I think humans mostly care about humans; when they care about animals it’s as a side effect of virtues whose primary purpose is to facilitate cooperation and peace between humans (and caring about animals is a good way of signaling those virtues).
(and I don’t think intelligence and “personhood”, whatever that is, have that much to do with each other.)
Hmm. Last I checked, I do care about animals, regardless of signalling—indeed, I often take a bit of a signal-hit when people see me help beetles safely across a street, or spend time cosying up to an orangutan through the glass at the zoo (thereby rendering him less viewable by the other patrons, even though he’s primarily responding to a familiar presence and displays no interest in anyone else).
There’s very little sense of signalling virtue—I’m not a vegan or vegetarian, I don’t adhere to any religion with specific rules about the treatment of animals; I certainly don’t find it to enhance my cooperation with other people (some people admire it, but an awful lot of them find it a bit weird or kooky).
I learned something new in the process of finding you a bit weird and kooky, and thereby no longer do. So, upvoted.
(I wasn’t sure if beetles even had brains, which seemed somehow relevant to their moral standing, so I looked it up- and what do you know, nociception has been demonstrated in insects.)
(And beetles do have brains. Sort of.)
Yeah, insects have brains. And pain. Many have some degree of personality differentiation, even if the space of possible variance is pretty narrow compared to humans. I certainly can’t prevent most of the insects of the world from experiencing what is, to them, a hideously painful death (and indeed, have sometimes hastened that process for crickets when feeding them to pet mantises), but when I see a little dermestid beetle crawling around where it’ll certainly be hit by a car, my impulse is to save it. To the extent I’m interested in justifying that, it’s that I can make a difference here and now for this organism, and want to do so.
Me, internally: No way that’s true. But, well, just in case...
(five minutes of googling)
I’m learning all sorts of new stuff today!
That sounds like a perfectly valid reason to me.
I’m a bit fuzzy about what counts as signalling and what doesn’t, but I think it covers more cases than those involving conscious planning.
But anyway, I’d say you care about animals because you’re a kind person, but that humans tend to be kind mostly because evolutionarily it’s been a benefit by facilitating cooperation and reciprocation. I don’t know whether evolution just implemented “be kind to everything” instead of just humans because it took fewer lines of code (kindness to animals as spandrel), or whether kindness to animals was deliberately implemented because of its signaling value (it may not be hard-coded, but just learnt as children).
(For what it’s worth, I tend to save small bugs and throw them out of the window instead of killing them, which my wife would prefer. This device is convenient for safely and easily catching bugs, and observing them!)
I had misunderstood your initial comment; it sounded to me like you were saying humans don’t really care about animals, but often find it desirable to signal that they do. Thanks for clarifying!
About 35%, two years ago.
True, but if it’s not the area in which Phil judges moral relevance, then I want to know why he thinks chimps and humans are different.
I was going to use the comparison “Humans born mentally handicapped to the point that their cognitive function is equivalent to chimps.” (This avoids the potential issue of “babies grow up to be average humans.”)
If you’re not willing to advocate testing on humans who are similar to chimps, I want to know why.
Along similar lines I was going to propose that it should be considered moral to test on “Low IQ Jocks as soon as they finish High School”. After all they have finished their glory years and are different to me in similar ways to how I am different to a chimpanzee. But I decided not to post because I decided it was dangerous to go anywhere near a space including “different” and “less moral consideration”.
I agree that it’s dangerous, but I think any remotely productive use of this thread is going to have to go there. If we’re not asking that question, we’re not asking the right questions.
I do think chimps and humans are different; but most members of PETA probably believe they are more different than I do. I think you’re reading positions into my post that aren’t there.
I advocated alternatively testing on humans like myself.
I apologize, I was focusing on a lot of the comments and missed that you had made that point.
I don’t currently know what the rules are for human testing. I think it should be theoretically possible for humans to submit themselves for whatever testing they want, but I also think that as soon as that market exists, there will be those who attempt to exploit it in ways I’d consider unethical. That’s a complex issue that I don’t have an opinion on yet.
It is not clear to me how that avoids the issue of including the future.
It avoids the issue of including the future of particular people. Some people care about that, others don’t, but it reduces the range of reasons you might object to the comparison.
From what I know, I personally weight chimps as maybe 1⁄3 times as morally significant as humans. I’m sometimes willing to sacrifice humans to save other humans, and I’d sacrifice a chimp to save about 1⁄3 as many humans. (I’d also sacrifice a human to save 3x as many chimps). This is mostly an intuitive belief. I can imagine myself changing the number to something as low as 1/10th, maybe even as low as 1/100th (I don’t expect to drop it that far).
It’s important to note, though, that I DON’T sacrifice humans on a 1-for-1 trade off without their consent. I don’t want to live in a world where someone can sacrifice me without me having a say in the matter. There may be cases where I’m willing to consent to sacrifice. I’m not sure if I can identify them right now.
There are still circumstances where, while pissed, I’d grudgingly accept that the Mastermind doing the sacrificing was right to do so. (If they had to divert a train that was going to kill a lot of people, for example. Probably more than 5 though). The number of lives saved to be worth it also has to consider how perfect the information is, and the likelihood that the sacrificer isn’t running on damaged hardware.
So theoretically, I’m okay with sacrificing chimps to save arbitrarily large numbers of people, but because the chimps CAN’T consent, I’d have to be willing to sacrifice somewhere between 1⁄3 and 1/10th as many humans to accomplish the same thing.
I read your post and tried to come up with an ‘exchange rate’ of my own, and it was much more difficult to do than I thought it would be before I tried it. I thought that it would be along the lines of thousands/hundreds of thousands of chimps == 1 human, as I couldn’t conceive of letting one human die in exchange for any smaller number of chimps, but then I realized that it would be much easier to think of dead chimps as an opportunity cost, and was just reacting with instinctual revulsion. This is assuming that dead chimps can’t be used (to the same extent) as live chimps to aid in medical research.
So, what is the current value that we place on the life of a chimp? If after m (successful) studies, each using n chimps, we can save l human lives, then (assuming in the worst case that each study kills all n chimps):

(m × n) × (the value of a chimp life in utilons) = l × (the value of a human life in utilons)

So: (m × n)/l = (the value of a human life)/(the value of a chimp life)
This estimate is going to be higher than in real life, as we don’t kill all the chimps used in a typical study. The difficulty would be in quantifying the number of studies necessary to save a human life, or the number of lives saved by a particular discovery.
However, thinking this way, I would place my ‘exchange rate’ on the order of 200-300 chimps to 1 human life; if necessary, we should let 1 human die so that 300 chimps might live, since their value as test subjects could then be used to save other humans.
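To make the arithmetic concrete, here’s a minimal sketch of the implied ratio. All study counts below are invented for illustration; only the (m × n)/l formula comes from the comment above.

```python
# Sketch of the implied "exchange rate" between chimp and human lives,
# assuming (worst case) that every chimp used in a study dies.
# The specific numbers are hypothetical, chosen only for illustration.

def human_per_chimp_ratio(m_studies, n_chimps_per_study, l_lives_saved):
    """Return (m*n)/l: how many chimp lives trade against one human life."""
    return (m_studies * n_chimps_per_study) / l_lives_saved

# e.g. 50 studies of 30 chimps each that together save 6 human lives:
ratio = human_per_chimp_ratio(50, 30, 6)
print(ratio)  # 250.0 -> one human life "costs" roughly 250 chimp lives
```

With those made-up inputs the ratio lands inside the 200-300 range suggested above; in practice the hard part is estimating l, the lives actually saved per discovery.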
I just don’t think chimps are intelligent enough to have lives whose significance is on the same order of magnitude as a human’s; I think that 1⁄3 or 1/10th of a human’s life is much too high a value.
Have you corrected for your estimate of p(chimps are uplifted in the next fifty years)?
Edit: Okay, if it makes a difference I only realized the Planet of the Apes reference after I posted, I was making a serious point about the difference between human toddlers and chimps as it relates to the possibility of future personhood.
I hadn’t considered the possibility that chimps could/would be uplifted in the near future (50 years or mean chimp lifetime is a good rule of thumb); I think it’s entirely possible that the technology would be there, but I don’t understand the motivation for wanting to uplift chimps. I guess the reasoning is that more sapient beings == more interesting conversations, more math proofs, more works of art, so more Fun, but I’m not sure that we would want to uplift chimps if we had the technology to do so.
If we had the technology to uplift a species, I think it would be likely that we had the technology to have FAI or uploaded human brains, which would be a more efficient way to have more sapient beings with which to talk. Is it immoral to leave other species the way they are if transhumanism or FAI take off?
This seems strange to me. Can you expand on your reasoning? Uplifting seems to me to be potentially a lot simpler. The tech level needed to identify the genes that are most responsible for human intelligence is not that much beyond our current one. And the example species you’ve used, chimps, are close enough to humans that for at least some of those genes, simply inserting them into the chimp genome would likely substantially increase their intelligence.
Uplifting seems orders of magnitude easier than uploading at least.
I’ll concede that you are probably right about uplifting being easier.
This was my reasoning: Properly identifying which gene encodes for what and usefully altering genes to express a particular phenotype as complex as human-level intelligence would require (in any reasonable amount of time) at the least a narrow AI to process and refine the huge amount of data in the half-chromosome or so that separates us from chimps. Chimps are close to humans, yes, but altering their DNA to uplift them seems to me to be the type of problem that would either take years of Manhattan-Project level dedication with the technology we have right now, or some sort of AI to do the heavy lifting for us.
I think I’m way out of my depth here, though, as I don’t know enough about genetic engineering or AI research to know with confidence which would be easier.
[Edited for typos.]
If the following is very wrong or morally abhorrent, please correct me rather than downvote. I’m trying to work it out for myself and what I came up with seems intuitively incorrect. It is also based on the idea that the mentally handicapped have chimp-like intelligence, which I don’t know to be true but is implied by your comment.
So basically, what makes us homo sapiens is our ancestry, but what makes us people is our intelligence. An alien with a brain that somehow worked exactly equivalently to ours would be our equal in every important way, but an alien with a chimp-like intelligence (one that for our purposes would essentially BE a chimp) wouldn’t. It would deserve sympathy, and it would be wrong to hurt it for no reason, but I wouldn’t value an alien-chimp’s life as highly as a human’s or an alien-human’s. So it seems to me that it follows that the mentally handicapped (if they indeed have chimp-like intelligences) don’t in fact deserve more moral consideration than alien-chimps or earth-chimps (ignoring their families, which presumably have normal intelligences and would very much not approve of their use in experiments). If there are no safer ways to get the same results as we do from chimp studies, which I believe to be the case, then the best option we have for now is to continue studying them. Studying the mentally handicapped would be similarly bad-but-acceptable, but I wouldn’t advocate for it since it would be so unlikely to ever occur. Testing on the mentally handicapped seems very wrong, but only for “speciesist” reasons as far as I can tell.
I specified “people mentally handicapped to the point that they are equivalent to chimps.” There’s a lot of ways one can be mentally handicapped.
For the record, I’m a vegetarian. I measure morality based on median suffering/life satisfaction. Intelligence is only valuable insofar as it can improve those metrics, and certain kinds of intelligence probably result in a wider and deeper source of life satisfaction.
I don’t think chimps contribute dramatically to universal flourishing, but I’m not sure that the average human does either. I think that it’s best to have a rule “don’t harm sentient creatures”, but to occasionally turn a blind eye to certain actions that benefit us in the long term.
i.e. the guy who invented the smallpox vaccine did something horribly unethical, which we should not allow on a regular basis, especially not today when we have more options for testing. Occasionally, doing something like that is necessary for the greater good, but most people who think their actions are sufficiently “greater good” to break the rules are wrong, so we need to discourage it in general.
This is a nice rule in principle, but in practice becomes tough. First, how do we define sentience? Second, what counts as harm? Is there an action/inaction distinction here? If it is morally unacceptable to let humans in the developing world starve, do we have a similar moral obligation to chimps? If not, why not?
I’m not sure what you are talking about here. Can you expand?
Oh in practice it’s definitely tough. Optimal morality is tough. I judge myself and other individuals on the efforts they’ve made to improve from the status quo, not on how far they fall short of what they might hypothetically be able to accomplish with infinite computing power.
In my ideal world, suffering doesn’t happen, period, except to the degree that some amount of suffering is necessary to bring about certain kinds of happiness (i.e. everyone, animals included, gets exactly as much as they need, nothing more).
I don’t know to what extent that’s actually possible without accidentally wreaking havoc on the ecosystem and causing all kinds of problems, and in the meantime it’s easier to get public support for helping other humans anyway.
I’m working from old memories from middle school, and referencing what is probably a bit of a “folk version” of the real thing, but my recollection was that Edward Jenner tested his smallpox vaccine on some kid, then gave the kid a full dose of smallpox without his consent.
SOMEBODY had to try that at some point, and I think Jenner had reasonable evidence, but I don’t think that sort of thing would fly today.
I agree it wouldn’t pass muster today, but that may just be because we aren’t facing a disease as deadly as smallpox.
There’s a good moral case for experimenting on somebody without their consent IF:
1) Doing the experiment has a high probability of getting a cure into widespread use quickly.
2) Getting consent for an equivalent experiment would be difficult or time-consuming.
3) The disease is prevalent and serious enough that a delay to find a consenting subject is a bigger harm than the involuntary experiment.
Agreed.
Unless they have it coming! I consider it unethical to not harm sentient creatures in certain circumstances.
If you think our moral concern should follow intelligence then it follows that chimps and the mentally handicapped are not morally equal to humans of normal intelligence. Depending how much differing intelligence results in differing moral consideration this could justify chimp and mentally handicapped testing.
But while some level of intelligence does seem to be necessary for an animal to suffer in a way we find morally compelling, it does not follow that abusing the slightly less intelligent is at all justified. It is not at all obvious that the mentally handicapped or chimpanzees suffer less than humans of normal intelligence. Nor is it obvious that mentally handicapped humans and chimpanzees don’t differ in this regard. But intelligence is almost certainly not the same thing as moral value. There are possibly entities that are very intelligent but for which we would have little moral regard.
Right, that makes sense. I guess if something can suffer, and notice that it’s suffering, and wish it weren’t suffering, then it should be as morally valuable as a person...maybe.
I think dogs are “capable of suffering in a way I find morally compelling” though, and I would sacrifice probably a lot of dogs to save myself or another human. Is that just me being heartless?
I mentioned that the hypothetical aliens would have brains that work just like ours, not that they would be just as intelligent.
Your method should be to figure out what it is about humans that makes them morally valuable to you and then see if those traits are found in the same degree elsewhere.
I agree.
Knowledge of brain responses cannot currently be obtained in any other way than by observing brain responses. If you want the results to apply well to humans, you often have to observe either great apes or humans. It usually isn’t practical to observe humans, because the restrictions on human experimentation are even tighter.
A. There isn’t a ban on human testing; it’s just very difficult to get approval for anything with any degree of invasiveness.
B. My post says, “Banning chimp testing should thus be done only in conjunction with allowing human testing.” Your question doesn’t make sense as a response to that.
Definitely. We’ve learned a lot of important things about human cognitive development from experiments on human babies and toddlers. These are harmless experiments. (Well, since the 1980s, anyway.) Can I assume, for example, that you oppose allowing someone to show different objects to a baby, and measure which object they spend more time looking at? Because these are among the types of experiments that the editors would like to ban.
I agree. And I already do that. Doing so does not imply that you will always conclude that animal experimentation is not morally permissible.
Let me ask you a question: Do you ever eat pork?
Would you be okay with a compromise ban that says great apes can be experimented on only in similar circumstances to those we allow for experiments on toddlers?