I’m curious why people say things like:
“value is subjective and I happen to care only about people who already exist”
“value is subjective and I happen to care only about people who live in the same country as me”
“value is subjective and I happen to care only about my friends and family”
but not:
“value is subjective and I happen to care only about whoever is currently in my field of vision”.
I wrote:

“I only care about people who actually exist (or will exist).”

I didn’t write that accurately. I should have said:
I even care about people who potentially will exist in proportion to the probability that they will exist, which really should be included in the term ‘actually’.
So for example, I care that the people who become pregnant next year get good prenatal care for the sake of the children that they will bear the year after (as well as for their own sakes).
However, I don’t care whether they actually become pregnant, or (given that they do) that those children actually are born, except as this affects them and other actual people. All in all, I wish that fewer people became pregnant and fewer babies were born, for various reasons having to do with how this affects other people, although my main emphasis is that women should have the freedom to choose whether to become and remain pregnant. (So in this vein, I donate to Planned Parenthood, and once did volunteer work for them, and may do so again. This also helps with the prenatal care.)
Then is it fair to say that, all else being equal, for people who don’t currently exist, you’re indifferent between them having no life and an OK life, and you’re indifferent between them having no life and a great life, but you prefer them having a great life to an OK life?
This must be a standard problem in utilitarian theory, but I don’t know its name.
In case you haven’t read my comment introducing myself: my ultimate social value is freedom, in a sort of utilitarian calculus where utility is freedom. So to judge whether someone should live, the main question to ask is whether they want to live. (I forgot to say in my reply to MartinB that of course I am against medical treatment of those who do not wish it.)
But those who do not exist do not wish anything. So it doesn’t matter.
If by ‘a great life’ you mean a life of great freedom, then I prefer that to the alternative life. But one can only judge what such a life actually is once the person actually exists and has wants. I support prenatal care only on the basis of a prediction about what people will want later, like wanting to be healthy.
It still doesn’t hang together mathematically, since I should simply take expected utility (expected freedom). As I also said in my introductory comment, I don’t really believe that any utilitarian calculus captures my values. I can understand decision theory once the utilities are assigned, but I don’t understand how to assign utilities in the first place.
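One way to make the tension explicit (a sketch only; the labels N, O, G, the probability p, and the utility function u are notation I’m introducing here, not anything from the thread):

```latex
% N = no life, O = an OK life, G = a great life;
% $\sim$ is indifference, $\succ$ is strict preference.
% The attitudes described above are:
\[
  N \sim O, \qquad N \sim G, \qquad G \succ O .
\]
% If indifference were transitive, the first two relations would give
% $O \sim G$, contradicting $G \succ O$; equivalently, no single utility
% function $u$ can satisfy $u(N) = u(O)$ and $u(N) = u(G)$ while also
% having $u(G) > u(O)$.
% Taking expected utility instead, with $p$ the probability that the
% person comes to exist:
\[
  V \;=\; p \, u(\text{life}) \;+\; (1 - p) \, u(\text{no life}),
\]
% which is internally consistent, but it forces you to assign some
% value to non-existence rather than staying indifferent to it.
```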
Pretty sure this is just the flip side of the repugnant conclusion (http://en.wikipedia.org/wiki/Mere_addition_paradox), which is about whether you should care about average welfare or total welfare.
Thanks, that’s it!
I do say that. I care (in terms of how I actually act) about people I see, people I like, people in my extended networks, and all living people. For example, if someone had a heart attack, I would help them even if, rationally, the time I spent could be converted into far more lives through optimal giving.
Sure, but my point is you probably wouldn’t use this example of “caring” as a justification in abstract philosophical debates about, e.g., the ethics of cryonics, because visual-field-dependent morality is absurd enough to make it intuitively reasonable that values you truly care about should hold up to some sort of reflection.
It’s important not to be too loose with the idea of “care in terms of how I actually act”, or you’ll end up saying you care about being near large masses or making hiccup noises. You can plausibly argue that falling and hiccups aren’t behavior in the way that helping someone with a heart attack is, but it’s not like there’s a bright dividing line.
You know the “extended mind” hypothesis that says things like calculators or search engines can in some circumstances be seen as parts of your mind? It seems like the flip side of that is an “abridged mind” hypothesis where some parts of your brain are like alien mind control lasers, except located in your skull.
Well, yes. I have a reflectively endorsed belief that being an altruist is good and proper. If I were to endorse selfishness, I would include exceptions for those categories, in increasing order of effect on my decisions.
If value is subjective, there’s nothing particularly odd about saying the first things but not the second. That’s just their subjective preference.
Because that’s not really how humans work. We care more about things right in front of us, but we don’t stop caring about someone just because they’re not in our field of vision, and we don’t necessarily start caring about anyone who is.
So imagine that I said “to a substantial extent”.
Sure, but there are things close enough to what I said that are true but that would have been more of a pain to write down.