Having lost parents and grandparents in the last several years, I appreciate your sentiment. But, as much as I would want to live forever, I am not sure that eternal individual life is good for humanity as a whole, at least without some serious mind hacking first. Many other species, like, say, salmon, have a fixed lifespan, so intelligent salmon would probably not worry about individual immortality. It seems to me that associating natural death of an individual with evil is one of those side effects of evolution humans could do without. That said, I agree that suffering and premature death probably have no advantage for the species as a whole and ought to be eliminated, but I cannot decide for sure if a fixed lifespan is such a bad idea.
I actually mostly agree with you. Or at least, I agree that the answer is not terribly obvious. I didn’t expound upon it during the ceremony (partly due to time, and partly because one of the most important aspects of the moment was to give anti-deathists a time to grieve for people they lost, whose deaths they were unable to process among peers who shared their beliefs.)
But in the written-up version here, I thought it was important that I make my views clear, and included the bit about me not actually being that much of an anti-deathist. I think the current way people die is clearly suboptimal, and once you remove it as an anchor I’m not sure if people should die after 100 years, a thousand years, or longer or at all. But I don’t think it’s as simple an idea as “everybody gets to live forever.”
The obvious answer is “Everyone dies if and when they feel like it. If you want to die after 100 years, by all means; if you feel like living for a thousand years, that’s fine too; totally up to you.”
In any case that seems to me to be much more obvious than “we (for some value of ‘we’) decide, for all of humanity, how long everyone gets to live”.
In other words, I don’t think there’s a fact of the matter about “if people should die after 100 years, a thousand years, or longer or at all”. The question assumes that there’s some single answer that works for everyone. That seems unlikely. And the idea that it’s OK to impose a fixed lifespan on someone who doesn’t want it is abhorrent.
Additionally — this is re: shminux’s comment, but is related to the overall point — “Good for humanity as a whole” and “advantage for the species as a whole” seem like nonsensical concepts in this context. Humanity is just the set of all humans. There’s no such thing as a nebulous “good for humanity” that’s somehow divorced from what’s good for any or every individual human.
In other words, I don’t think there’s a fact of the matter about “if people should die after 100 years, a thousand years, or longer or at all”. The question assumes that there’s some single answer that works for everyone. That seems unlikely.
Not necessarily true. The question posits the existence of an optimal outcome; it just neglects to specify what, exactly, that outcome would be optimal with respect to. It would probably be necessary to determine the criteria a system that accounts for immortality has to meet to satisfy us before we start coming up with solutions.
The obvious answer is “Everyone dies if and when they feel like it. If you want to die after 100 years, by all means; if you feel like living for a thousand years, that’s fine too; totally up to you.”
A limited distribution of resources somewhat complicates the issue, and even with nanotechnology and fusion power there would still be the problem of organizing a system that isn’t inherently self-destructive.
I think I agree with the spirit of your answer: “We can’t possibly figure out how to do that, and in any case doing so wouldn’t feel right, so we’ll let the people involved sort it out amongst themselves.” But there are a lot of problems that can arise from that. There would probably need to be some sort of system of checks and balances, but that would likely deteriorate over time and could itself turn the whole thing upside down. I doubt you’ll ever be able to really design a system for all humanity.
And the idea that it’s OK to impose a fixed lifespan on someone who doesn’t want it is abhorrent.
To you, perhaps. Well, and me. Your intuitions on the matter are not universal, however. Far from it, as our friends’ comments show.
My main problems (read: ones that don’t rest entirely on feelings of moral sacredness) with such an idea would be the dangerous vulnerability of the system it describes to power grabs, its capacity to threaten my ambitions, and the fact that, if implemented, it would lead to a world that’s all around boring (I mean, if you can fix the life spans then you already know the ending. The person dies. Why not just save yourself the trouble and leave them dead to begin with?)
If resources are limited and population has reached carrying capacity — even if those numbers are many orders of magnitude larger than today’s — then each living entity would get one full measure of participating in the creation of a new living entity, and then enough time after that such that the average age of participating in life-creation fell at the midpoint between birth and death. So with sexual reproduction, you’d get to have two kids, and then when your second kid was as old as you were when your first kid was born, it would be your turn to die. I suspect that in that world I would decide to have my second kid eventually, and thus I’d end up dying when my age was somewhere in the three digits.
Obviously, that solution is “fair and stable”, not “optimal”. I’m not arguing that that’s how things should work — and I can easily imagine ways to change it that I’d view as improvements — but it’s a nice simple model of how things could be stable.
Well, that model may be stable (I haven’t actually thought it through sufficiently to judge, but let’s grant that it is) — but how exactly is it “fair”? I mean, you’re assuming a set of values which is nowhere near universal in humanity, even. I’m really not even sure what your criteria here are for fairness (or, for that matter, optimality).
My problem with what you describe is the same as my problem with what shminux says in some of his comments, and with a sort of comment that people often make in similar discussions about immortality and human lifespan. Someone will describe a set of rules, which, if they were descriptive of how the universe worked, would satisfy some criteria under discussion (e.g. stability), or lack some problem under discussion (e.g. overpopulation).
Ok. But:
Those rules are not, in fact, descriptive of how the universe works (or else we wouldn’t be having this discussion). Do you think they should be?
If so, how do we get from here to there? Are we modifying the physical laws of the universe somehow? Are we putting enforced restrictions in place?
Who enforces these restrictions? Who decides what they are in the first place? Why those people? What if I disagree? (i.e. are you just handwaving away all the sociopolitical issues inherent in attempts to institute a system?)
For instance, you say that “each living entity would get to have” so-and-so in terms of lifespan. What does that mean? Are you suggesting that the DNA of every human be modified to cause spontaneous death at some predetermined age? Aside from the scientific challenge, there are… a few… moral issues here. Perhaps we’ll just kill people at some age?
What I am getting at is that you can’t just specify a set of rules that would describe the ideal system when in reality, getting from our current situation to one where those rules are in place would require a) massive amounts of improbable scientific work and social engineering, and b) rewriting human terminal values. We might not be able to do the former, and I (and, I suspect, most people, at least in this community) would strongly object to the latter.
Superhappy aliens, FAI, United Nations… There are multiple possibilities. One is that you stay healthy for, say, 100 years, then spawn once blissfully and stop existing (salmon analogy). Humans’ terminal values are adjusted in a way that they don’t strive for infinite individual lifespan.
You seem to be implying that designed death is worse. How do you figure?
I don’t. Suffering is bad, finite individual existence is not necessarily so.
No proposal that includes these words is worth considering. There’s no Schelling point between forcing people to die at some convenient age and be happy and thankful about it, and just painting smiles on everyone’s souls. That’s literally what terminal values are all about; you can only trade off between them, not optimize them away whenever it would seem expedient to!
If it’s a terminal value for most people to suffer and grieve over the loss of individual life—and they want to suffer and grieve, and want to want to—a sensible utilitarian would attempt to change the universe so that the conditions for their suffering no longer occur, instead of messing with this oh-so-inconvenient, silly, evolution-spawned value. Because if we were to mess with it, we’d be messing with the very complexity of human values, period.
I agree with what you’re saying, but just to complicate things a bit: what if humans have two terminal values that directly conflict? Would it be justifiable to modify one to satisfy the other, or would we just have to learn to live with the contradiction? (I honestly don’t know what I think.)
There’s no Schelling point between forcing people to die at some convenient age and be happy and thankful about it, and just painting smiles on everyone’s souls.
A statement like that needs a mathematical proof.
If it’s a terminal value for most people to suffer and grieve over the loss of individual life
“If” indeed. There is little “evolution-spawned” about it (not that it’s a good argument to begin with, trusting the “blind idiot god”), a large chunk of this is cultural. If you dig a bit deeper into the reasons why people mourn and grieve, you can usually find more sensible terminal values. Why don’t you give it a go.
I’m really curious to know what you mean by ‘terminal meta-values’. Would you mind expanding a bit, or pointing me in the direction of a post which deals with these things?
No, I’m perfectly OK with adjusting terminal values in certain circumstances. For example, turning a Paperclipper into an FAI is obviously a good thing.
EDIT: Of course, turning an FAI into a Paperclipper is obviously a bad thing, because instead of having another agent working towards the greater good, we have an agent working towards paperclips, which is likely to get in the way at some point. Also, it’s likely to feel sad when we have to stop it turning people into paperclips, which is a shame.
Unless you own a time machine and come from a future where salmon-people rule the earth, I seriously doubt that. If you’re a neurotypical human, then you terminally value not killing people. Mindraping them into doing it themselves continues to violate this preference, unless all you actually care about is people’s distress when you kill them, in which case remind me never to drink anything you give me.
… are you saying I’m foolish to assume that you value human life? Would you, in fact, object to killing someone if they wouldn’t realize? Yes? Congratulations, you’re not a psychopath.
Tell you what. Instead of typing out the answer to that, I’m going to respond with a question: how do you* think people who join the military justify the fact that they will probably either kill or aid others in killing?
*(I do have an answer in mind, and I will post it, even if your response refutes it.)
I think they have many different justifications depending on the person, ranging from “it’s a necessary evil” to “I need to pay for college and can hopefully avoid getting into battle” to “only the lives of my own countrymen matter”, just like people can have many different justifications for why they’d approve modifying the terminal values of others.
No. Something can be bad without being worse than the other options, and people can be mistaken about whether an action will kill people. This is quite separate from actually having no term for human life in one’s utility function.
There’s an important difference between “not bad” and “bad but justifiable under some circumstances”. I don’t think supporters of abortion, execution, or war believe that killing per se is morally neutral; each of those three has its own justification.
Unless you own a time machine and come from a future where salmon-people rule the earth, I seriously doubt that. If you’re a neurotypical human, then you terminally value not killing people.
Seems like a perfectly functional Schelling point to me. Besides, I needed a disclaimer for the possibility that he’s actually a psychopath or, indeed, an actual salmon-person (those are still technically “human”, I assume.)
“Neurotypical”: that’s the tyranny of some supposedly existing, elusive majority which has always terminally valued essentially the same things (ever since living in trees) and always will (even when colonizing the Canis Major Dwarf Galaxy): strawberry ice cream, not killing people.
If your utility function differs, it is wrong, while theirs is right. (I’d throw in some reference to a divine calibration, but that would be overly sarcastic.)
I may be confused by the sarcasm here. Could you state your objection more clearly? Are you arguing “neurotypical” is not a useful concept? Are you accusing me of somehow discriminating against agents that implement other utility functions? Are you objecting to my assertion that creating an agent with a different utility function is usually instrumentally bad, because it is likely to attempt to implement that utility function to the exclusion of yours?
Are you accusing me of somehow discriminating against agents that implement other utility functions?
Yes, here’s your last reply to me on just that topic:
Except that humans share a utility function, which doesn’t change. (...) Cached thoughts can result in actions that, objectively, are wrong. They are not wrong because this is some essential property of these actions; morality is in our minds, but we can still meaningfully say “this is wrong” just as we can say “this is a chair” or “there are five apples”.
The fact that morality is acted upon in different ways (due to your “layers” or simply mistaken beliefs about the world) doesn’t change the fact that it is there, underneath [emphasis mine], and that this is the standard we work by to declare something “good” or “bad”. We aren’t perfect at it, but we can make a reasonable attempt.
It is bizarre to me how you believe there is some shared objective morality—“underneath”—that is correct because it is “typical” (hello fallacious appeal to majority), and that outliers that have a different utility function have false values.
Even if there are shared elements (even across e.g. large vague categories such as Chinese values and Western values), such as surmised by CEV_humankind (probably an almost empty set), that does not make anyone’s own morality/value function wrong, it merely makes it incongruent with the current cultural majority views. Hence the “tyranny of some supposedly existing elusive majority”.
Bloody hell, it’s you again. I hadn’t noticed I was talking to the same person I had that argument with. I guess that information does add some context to your comment.
I’m not saying they’re wrong, except when “wrong” is defined with reference to standard human values (which is how I, and many others on LW, commonly use the term.) I am saying their values are not my values, or (probably) your values. That’s not to say they don’t have moral worth or anything, just that giving them (where “them” means salmon people, clippies or garden-variety psychopaths) enough power will result in them optimizing the universe for their own goals, not ours.
Of course, I’m not sure how you judge moral arguments, so maybe I’m assuming some common prior or something I shouldn’t be.
Your comment of just saying “well, this is the norm” does not fit with your previously stated views; see this exchange:
I would value the suffering of my child as more important than the suffering of your child.
That seems … kind of evil, to be honest.
Are most all parents “evil” in that regard?
I believe the technical term is “biased”.
My assertion is that all humans share utility—which is the standard assumption in ethics, and seems obviously true
So if the majority of humans values the lives of their close family circle higher than random other human lives—those are the standard human values, the norm—then you still call those evil or biased, because they don’t agree with your notion of what standard human values should be, based on “obviously true” ethical assumptions. *
Do you see the cognitive dissonance? (Also, you’re among the first—if not the only—commenters on LW who I’ve seen using even just “standard human values” as an ought, outside the context of CEV—a different concept—for FAI.)
* It fits well with some divine objective morality; however, it does not fit well with some supposed and only descriptive, not prescriptive, “standard human values” (not an immutable set in itself; you probably read Harry’s monologue on shifting human values through the ages in the recent HPMOR chapter).
So if the majority of humans values the lives of their close family circle higher than random other human lives—those are the standard human values, the norm—then you still call those evil or biased, because they don’t agree with your notion of what standard human values should be, based on “obviously true” ethical assumptions. *
I’m asserting that the values you describe are not, in fact, the standard human values. If it turned out that parents genuinely have different values from other people, then they wouldn’t be biased (modulo definitions of “evil”).
(Also, you’re among the first—if not the only—commenters on LW who I’ve seen using even just “standard human values” as an ought, outside the context of CEV—a different concept—for FAI.)
We are both agents with human ethics. When I say we “ought” to do something, I mean by the utility function we both share. If I were a paperclipper, I would need separate terms for my ethics and yours. But then, why would I help you implement values that oppose my own?
It comes down to “I value this human over that other human” being a part of your utility function, f(this.human) > f(that.human). [Syntactical overloading for comedic relief] A bias is something affecting your cognition — how you process information, not what actions you choose based upon that processing. While you can say “your values are biased towards X”, that is using the term differently than in the usual LW context.
In particular, I doubt you’ll find more than 1 in a million humans who would not value some close relative’s / friend’s / known person’s life over a randomly picked human life (“It could be anything! It could even be another boat!”).
You have here a major, major part of the utility function of a majority of humans (throughout history! in-group > out-group), yet you persist in calling that an evil bias. Why? Because it does not fit with what the “standard human values” should be? What god intended? Or is there no religious element to your position at all? If so, please clarify.
You realize that most humans value eating meat, right? Best pick up that habit, no? ;)
I really don’t think it’s a stretch to say that they value eating meat, even if only as an instrumental means of achieving tastiness and healthiness. Even beyond eating meat, it appears that a significant subset of humans (perhaps most?) enjoy hunting animals, suggesting that could be a value up for consideration.
And even if they do a tradeoff between the value of eating meat and the value of not inflicting suffering, that doesn’t mean they don’t have the value of eating meat. Policy debates should not appear one-sided.
You’re talking about humans alive today? Or all humans who’ve ever lived? I’d be extremely surprised if more than 50% of the former had hunted and enjoyed it. (And, considering that approximately half the humans are female, I would be somewhat surprised about the latter as well.)
So, by “enjoy hunting” I mean more “after going hunting, would enjoy it” than “have gone hunting and enjoyed it.” In particular, I suspect that a non-hunter’s opinion on hunting is probably not as predictive of their post-hunting experience as they would imagine it to be. It is not clear to me whether the percentage of women who would enjoy hunting is smaller than the percentage of men who would.
In particular, I suspect that a non-hunter’s opinion on hunting is probably not as predictive of their post-hunting experience as they would imagine that it would be.
Be careful with that kind of argument, for the same is probably true of heroin. (Yes, there are huge differences between hunting and heroin, but still...)
Superhappy aliens, FAI, United Nations… There are multiple possibilities. One is that you stay healthy for, say, 100 years, then spawn once blissfully and stop existing (salmon analogy). Humans’ terminal values are adjusted in a way that they don’t strive for infinite individual lifespan.
Possible outcome; better than most; boring. I don’t think that’s really something to strive for, but my values are not yours, I guess. Also, I’m assuming we’re just taking whether an outcome is desirable into account, not its probability of actually coming about.
I don’t. Suffering is bad, finite individual existence is not necessarily so.
Did you arrive at this from logical extrapolation of your moral intuitions, or is this the root intuition? At this point I’m just curious to see how your moral values differ from mine.
Good question. I’m just looking at some possible worlds where individual eternal life is less optimal, for the purposes of species survival, than finite life, yet where personal death is not a cause of individual anguish and suffering.
Note: Not trying to attack your position, just curious.
Fixed by whom, might I ask?
You seem to be implying that designed death is worse. How do you figure?
Superhappy aliens, FAI, United Nations… There are multiple possibilities. One is that you stay healthy for, say, 100 years, then spawn once blissfully and stop existing (salmon analogy). Humans’ terminal values are adjusted in a way that they don’t strive for infinite individual lifespan.
I don’t. Suffering is bad, finite individual existence is not necessarily so.
No proposal that includes these words is worth considering. There’s no Schelling point between forcing people to die at some convenient age and be happy and thankful about it, and just painting smiles on everyone’s souls. That’s literally what terminal values are all about; you can only trade off between them, not optimize them away whenever it would seem expedient to!
If it’s a terminal value for most people to suffer and grieve over the loss of individual life—and they want to suffer and grieve, and want to want to—a sensible utilitarian would attempt to change the universe so that the conditions for their suffering no longer occur, instead of messing with this oh-so-inconvenient, silly, evolution-spawned value. Because if we were to mess with it, we’d be messing with the very complexity of human values, period.
I agree with what you’re saying, but just to complicate things a bit: what if humans have two terminal values that directly conflict? Would it be justifiable to modify one to satisfy the other, or would we just have to learn to live with the contradiction? (I honestly don’t know what I think.)
Ah… If you or I knew what to think, we’d be working on CEV right now, and we’d all be much less fucked than we currently are.
A statement like that needs a mathematical proof.
“If” indeed. There is little “evolution-spawned” about it (not that it’s a good argument to begin with, trusting the “blind idiot god”), a large chunk of this is cultural. If you dig a bit deeper into the reasons why people mourn and grieve, you can usually find more sensible terminal values. Why don’t you give it a go.
If human terminal values need to be adjusted for this to be acceptable to them, then it is immoral by definition.
Looks like you and I have different terminal meta-values.
I’m really curious to know what you mean by ‘terminal meta-values’. Would you mind expanding a bit, or pointing me in the direction of a post which deals with these things?
Say, whether it is ever acceptable to adjust someone’s terminal values.
No, I’m perfectly OK with adjusting terminal values in certain circumstances. For example, turning a Paperclipper into an FAI is obviously a good thing.
EDIT: Of course, turning an FAI into a Paperclipper is obviously a bad thing, because instead of having another agent working towards the greater good, we have an agent working towards paperclips, which is likely to get in the way at some point. Also, it’s likely to feel sad when we have to stop it turning people into paperclips, which is a shame.
Unless you own a time machine and come from a future where salmon-people rule the earth, I seriously doubt that. If you’re a neurotypical human, then you terminally value not killing people. Mindraping them into doing it themselves continues to violate this preference, unless all you actually care about is people’s distress when you kill them, in which case remind me never to drink anything you give me.
Typical mind fallacy?
… are you saying I’m foolish to assume that you value human life? Would you, in fact, object to killing someone if they wouldn’t realize? Yes? Congratulations, you’re not a psychopath.
Everyone who voluntarily joins the military is a psychopath?
Tell you what. Instead of typing out the answer to that, I’m going to respond with a question: how do you* think people who join the military justify the fact that they will probably either kill or aid others in killing?
*(I do have an answer in mind, and I will post it, even if your response refutes it.)
I think they have many different justifications depending on the person, ranging from “it’s a necessary evil” to “I need to pay for college and can hopefully avoid getting into battle” to “only the lives of my own countrymen matter”, just like people can have many different justifications for why they’d approve modifying the terminal values of others.
So, despite the downvotes that bought me …
I said “non-psychopaths consider killing a Bad Thing.”
You said “But what about people who join the army?”
I said “What do you think?”
You said “I think they justify it as saving more lives than it kills, or come up with reasons it’s not really killing people”
I think this conversation is over, don’t you?
Do you see my point that there are plenty of ways by which somebody can consider killing as not-so-bad, without needing to be a psychopath?
No. Something can be bad without being worse than the other options, and people can be mistaken about whether an action will kill people. This is quite separate from actually having no term for human life in their utility function.
There’s an important difference between “not bad” and “bad but justifiable under some circumstances”. I don’t think believers in abortion, execution or war believe that killing per se is morally neutral. Each of those three has its justification.
I believe abortion is morally neutral, at least for the first few months and probably more.
But I said “killing per se”.
“Neurotypical”… almost as powerful as True!
Seems like a perfectly functional Schelling point to me. Besides, I needed a disclaimer for the possibility that he’s actually a psychopath or, indeed, an actual salmon-person (those are still technically “human”, I assume.)
Neurotypical, that’s the tyranny of some supposedly existing elusive majority which has always (ever since living in trees) terminally valued, and always will (even when colonizing the Canis Major Dwarf Galaxy), essentially the same things (such as strawberry ice cream, not killing people).
If your utility function differs, it is wrong, while theirs is right. (I’d throw in some reference to a divine calibration, but that would be overly sarcastic.)
I may be confused by the sarcasm here. Could you state your objection more clearly? Are you arguing “neurotypical” is not a useful concept? Are you accusing me of somehow discriminating against agents that implement other utility functions? Are you objecting to my assertion that creating an agent with a different utility function is usually instrumentally bad, because it is likely to attempt to implement that utility function to the exclusion of yours?
Yes, here’s your last reply to me on just that topic:
Also:
It is bizarre to me how you believe there is some shared objective morality—“underneath”—that is correct because it is “typical” (hello fallacious appeal to majority), and that outliers that have a different utility function have false values.
Even if there are shared elements (even across e.g. large vague categories such as Chinese values and Western values), such as surmised by CEV_humankind (probably an almost empty set), that does not make anyone’s own morality/value function wrong; it merely makes it incongruent with the current cultural majority views. Hence the “tyranny of some supposedly existing elusive majority”.
Bloody hell, it’s you again. I hadn’t noticed I was talking to the same person I had that argument with. I guess that information does add some context to your comment.
I’m not saying they’re wrong, except when “wrong” is defined with reference to standard human values (which is how I, and many others on LW, commonly use the term.) I am saying their values are not my values, or (probably) your values. That’s not to say they don’t have moral worth or anything, just that giving them (where “them” means salmon people, clippies or garden-variety psychopaths) enough power will result in them optimizing the universe for their own goals, not ours.
Of course, I’m not sure how you judge moral arguments, so maybe I’m assuming some common prior or something I shouldn’t be.
Your comment of just saying “well, this is the norm” does not fit with your previously stated views, see this exchange:
So if the majority of humans values the lives of their close family circle higher than random other human lives—those are the standard human values, the norm—then you still call those evil or biased, because they don’t agree with your notion of what standard human values should be, based on “obviously true” ethical assumptions. *
Do you see the cognitive dissonance? (Also, you’re among the first—if not the only—commenters on LW who I’ve seen using even just “standard human values” as an ought, outside the context of CEV—a different concept—for FAI.)
* It fits well with some divine objective morality, however it does not fit well with some supposed and only descriptive, not prescriptive “standard human values” (not an immutable set in itself, you probably read Harry’s monologue on shifting human values through the ages in the recent HPMOR chapter).
I’m asserting the values you describe are not, in fact, the standard human values. If it turned out that parents genuinely have different values to other people, then they wouldn’t be biased (down to definitions on “evil”.)
We are both agents with human ethics. When I say we “ought” to do something, I mean by the utility function we both share. If I were a paperclipper, I would need separate terms for my ethics and yours. But then, why would I help you implement values that oppose my own?
It comes down to “I value this human over that other human” being a part of your utility function, f(this.human) > f(that.human). [Syntactical overloading for comedic relief] A bias is something affecting your cognition—how you process information, not what actions you choose based upon that processing. While you can say “your values are biased towards X”, that is using the term in a different sense than the usual LW one.
In particular, I doubt you’ll find more than 1 in a million humans who would not value some close relative’s / friend’s / known person’s life over a randomly picked human life (“It could be anything! It could even be another boat!”).
You have here a major, major part of the utility function of a majority of humans (throughout history! in-group > out-group), yet you persist in calling that an evil bias. Why, because it does not fit with what the “standard human values” should be? What god intended? Or is there no religious element to your position at all? If so, please clarify.
You realize that most humans value eating meat, right? Best pick up that habit, no? ;)
I just realized I never replied to this. I definitely meant to. Must have accidentally closed the tab before clicking “comment”.
No. I believe they are mostly misinformed regarding animal intelligence and capacity for pain, conditions in slaughterhouses and farms etc.
[Edited as per Vaniver’s comment below]
I really don’t think it’s a stretch to say that they value eating meat, even if only as an instrumental means for valuing tastiness and healthiness. Even beyond eating meat, it appears that a significant subset of humans (perhaps most?) enjoy hunting animals, suggesting that could be a value up for consideration.
And even if they do a tradeoff between the value of eating meat and the value of not inflicting suffering, that doesn’t mean they don’t have the value of eating meat. Policy debates should not appear one-sided.
You’re talking about humans alive today? Or all humans who’ve ever lived? I’d be extremely surprised if more than 50% of the former had hunted and enjoyed it. (And, considering that approximately half of all humans are female, I would be somewhat surprised about the latter as well.)
So, by “enjoy hunting” I mean more “after going hunting, would enjoy it” than “have gone hunting and enjoyed it.” In particular, I suspect that a non-hunter’s opinion on hunting is probably not as predictive of their post-hunting experience as they would imagine that it would be. It is not clear to me if the percentage of women who would enjoy hunting is smaller than the percentage of men who would not.
Be careful with that kind of argument, for the same is probably true of heroin. (Yes, there are huge differences between hunting and heroin, but still...)
Dammit, I was literally about to remove that claim when you posted this :(
Possible outcome; better than most; boring. I don’t think that’s really something to strive for, but my values are not yours, I guess. Also, I’m assuming we’re just taking whether an outcome is desirable into account, not its probability of actually coming about.
Did you arrive at this from logical extrapolation of your moral intuitions, or is this the root intuition? At this point I’m just curious to see how your moral values differ from mine.
Good question. Just looking at some possible worlds where individual eternal life is less optimal than finite life for the purposes of species survival, yet where personal death is not a cause of individual anguish and suffering.