Continuing from my discussion with whpearson because it became off-topic.
whpearson, could you expand on your values and the reasons they are that way? Can you help me understand why you’d sacrifice the life of yourself and your friends for an increased chance of survival for the rest of humanity? Do you explicitly value the survival of humanity, or just the utility functions of other humans?
Regarding science, I certainly value it a lot, but not to the extent of welcoming a war here & now just to get some useful spin-offs of military tech in another decade.
Not directed at me, but since this is a common view… I don’t think your question takes an argument as its answer.
This is why. If you don’t want to protect people you don’t know, then you and I have different amygdalas.
Whpearson can come up with reasons why we’re all the same, but if you don’t feel it, those reasons won’t be compelling.
That’s just it. The amygdala is only good for protecting the people around you. It doesn’t know about ‘survival of humanity’. To the amygdala, a million deaths is just a statistic.
Note my question for whpearson: would you kill all the people around you, friends and family, hurting them face-to-face, and finally kill yourself, if it were to increase the chance of survival of the rest of humanity? whpearson said yes, he would. But he’d be working against his amygdala to do so.
Good to know you’re not a psychopath, anyway. :-)
I’m not sure that I can’t generalize the experience of empathy to apply to people whose faces I can’t see. They don’t have to be real people; they can be stand-ins. I can picture someone terrified, in desperate need, and empathize. I know that there are and will be billions of people who experience the same thing. Now, I can’t succeed in empathizing with these people per se; I don’t know who they are, and even if I did there would be too many. But I can form some idea of what it would be like to stare 1,000,000,000 scared children in the eyes and tell them that they have to die because I love my family and friends more than them. Imagine doing that to one child, and then doing it 999,999,999 more times. That’s how I try to emotionally represent the survival of the human race.
The fact that you will never have to experience this doesn’t mean those children won’t experience the fear. Now, you can’t make actual decisions like this (weighing the experiences of inflicting both sets of pain yourself), because for big decisions thinking like this will paralyze you with despair and grief. You will get sick to your stomach. But the emotional facts should still be in the back of your mind motivating your decisions, and you should come up with ways to represent mass suffering so that you can calculate with it without having to empathize with it every time. You need this kind of empathy when constructing your utility function; it just can’t actually be in your utility function.
Getting back to the original issue: since protecting humanity isn’t necessarily driven by the amygdala and suchlike instincts, and requires all the logic & rationalization above to defend, why do you value it?
From your explanation I gather that you first decided it’s a good value to have, and then constructed an emotional justification to make it easier for you to have that value. But where does it come from? (Remember that as far as your subconscious is concerned, it’s just a nice value to signal, since I presume you’ve never had to act on it—far mode thinking, if I remember the term correctly).
Extending empathy to those whom I can’t actually see just seems like the obvious thing to do, since the fact that I can’t see their faces doesn’t appear to me to be a morally relevant feature of my situation, and I know that if I could see them I would empathize.
So I’m not constructing an emotional justification post hoc so much as thinking about why anyone matters to me and then applying those reasons consistently.
There are two possible answers to this.
One is the raw emotion: it seems right in a wordless fashion. Why do people risk their lives to save an unrelated child, as firefighters do? Saving the human race from extinction seems like the epitome of this ethic.
Then there is the attempt to find a rationale for this feeling: the many arguments I have had with myself to find some reason why I might feel this way, or at least why it is not a very bad idea to feel this way.
My view of identity is something like the idea of genetic relatedness. If someone made an atom-level copy of you, that would be pretty much the same person, right? Because it shares the same beliefs, desires, and viewpoint on the world. But most humans share some beliefs and desires. From my point of view, the fact that you share some interest or way of thinking with me makes you a bit of me, and vice versa; not a large amount, but some. We are identity kin, as well as all sharing lots of the same genetic code (as we do with animals). So even if I die, parts of me are in everyone, even if not as obviously as they are in my friends. We are all mental descendants of Newton and Einstein and share that heritage. Not all things about humanity (or about me) are to be cherished, so I do not preach universal love and peace. But wiping out humanity would remove all of those spread-out bits of me.
Making self-sacrifice easier is the fact that I’m not sure surviving as a posthuman would preserve much of my current identity. In some ways I hope it doesn’t, as I am not psychologically ready for grown-up (on the cosmic scale) choices, though I wish to be. In other ways I am afraid that things of value will be lost that don’t need to be. But from any view, I don’t think it matters that much who will become the grown-ups. So my own personal continuity through the ages does not seem as important as humanity’s survival.
I think my friends would also share the same wordless emotion to save humanity, but not the odd, wordy view of identity I have.
One is the raw emotion: it seems right in a wordless fashion. Why do people risk their lives to save an unrelated child, as firefighters do? Saving the human race from extinction seems like the epitome of this ethic.
There are two relevant differences between this and wanting to prevent the extinction of humankind. One is, as I told Jack, that emotions only work for small numbers of people you can see and interact with personally; you can’t really feel the same kind of emotions about humanity.
The other is that people have all kinds of irrational, suboptimal, bug-ridden heuristics for taking personal risks; for instance, the firefighter might be confident in his ability to survive the fire, even though a lot of the danger doesn’t depend on his actions at all. That’s why I prefer to talk about incurring a certain penalty, like killing one person to save another, rather than taking a risk.
From my point of view, the fact that you share some interest or way of thinking with me makes you a bit of me, and vice versa; not a large amount, but some.
I understand this as a useful rational model, but I confess I can’t identify with this way of thinking at all on an emotional level.
What importance do you attach to actually being you (the subjective thread of experience)? Would you sacrifice your life to save the lives of two atomically precise copies of you that were created a minute ago? If not two, how many? In fact, how could you decide on a precise number?
But from any view, I don’t think it matters that much who will become the grown-ups. So my own personal continuity through the ages does not seem as important as humanity’s survival.
Personal continuity, in the sense of subjective experience, matters very much to me. In fact, it probably matters more than the rest of the universe put together.
If Omega offered me great riches and power, or designing a FAI singleton correctly, or anything I wanted, at the price of losing my subjective experience in some way (which I define to be much the same as death, on a personal level), then I would say no. How about you?